00:00:00.000 Started by upstream project "autotest-nightly" build number 4315
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3678
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.000 Started by timer
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.094 The recommended git tool is: git
00:00:00.094 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.112 Fetching changes from the remote Git repository
00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.136 Using shallow fetch with depth 1
00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.136 > git --version # timeout=10
00:00:00.151 > git --version # 'git version 2.39.2'
00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.892 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.903 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.915 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.915 > git config core.sparsecheckout # timeout=10
00:00:03.926 > git read-tree -mu HEAD # timeout=10
00:00:03.942 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.967 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.968 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.075 [Pipeline] Start of Pipeline
00:00:04.091 [Pipeline] library
00:00:04.093 Loading library shm_lib@master
00:00:04.093 Library shm_lib@master is cached. Copying from home.
00:00:04.114 [Pipeline] node
00:00:04.127 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.129 [Pipeline] {
00:00:04.141 [Pipeline] catchError
00:00:04.143 [Pipeline] {
00:00:04.159 [Pipeline] wrap
00:00:04.171 [Pipeline] {
00:00:04.183 [Pipeline] stage
00:00:04.186 [Pipeline] { (Prologue)
00:00:04.211 [Pipeline] echo
00:00:04.213 Node: VM-host-WFP7
00:00:04.222 [Pipeline] cleanWs
00:00:04.237 [WS-CLEANUP] Deleting project workspace...
00:00:04.237 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.244 [WS-CLEANUP] done
00:00:04.542 [Pipeline] setCustomBuildProperty
00:00:04.610 [Pipeline] httpRequest
00:00:05.145 [Pipeline] echo
00:00:05.146 Sorcerer 10.211.164.20 is alive
00:00:05.158 [Pipeline] retry
00:00:05.160 [Pipeline] {
00:00:05.173 [Pipeline] httpRequest
00:00:05.178 HttpMethod: GET
00:00:05.179 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.179 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.180 Response Code: HTTP/1.1 200 OK
00:00:05.181 Success: Status code 200 is in the accepted range: 200,404
00:00:05.181 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.502 [Pipeline] }
00:00:05.517 [Pipeline] // retry
00:00:05.525 [Pipeline] sh
00:00:05.816 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.832 [Pipeline] httpRequest
00:00:06.148 [Pipeline] echo
00:00:06.150 Sorcerer 10.211.164.20 is alive
00:00:06.157 [Pipeline] retry
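The checkout at the top of the log is a depth-1, single-ref fetch followed by a forced detached checkout of FETCH_HEAD. A minimal sketch of that pattern, run against a throwaway local repository (the real job targets the Gerrit build_pool remote with credentials, an HTTP proxy, and per-command timeouts, none of which are reproduced here):

```shell
#!/usr/bin/env bash
# Sketch of the job's shallow-checkout pattern against a local stand-in remote.
set -euo pipefail
tmp=$(mktemp -d)

# Stand-in for the remote repository, with two commits on master.
git init -q -b master "$tmp/origin"
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "first"
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "second"

# Depth-1 fetch of one ref, then a forced detached checkout of FETCH_HEAD,
# mirroring "git fetch --depth=1 ... refs/heads/master; git checkout -f <sha>".
# The file:// URL matters: plain local paths bypass shallow transfer.
git init -q "$tmp/jbp"
git -C "$tmp/jbp" fetch -q --depth=1 -- "file://$tmp/origin" refs/heads/master
git -C "$tmp/jbp" checkout -q -f FETCH_HEAD
git -C "$tmp/jbp" rev-parse --is-shallow-repository   # prints "true"
```

The depth-1 fetch keeps only the tip commit, which is why the log later pins everything to the fetched SHA rather than a branch name.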
00:00:06.159 [Pipeline] {
00:00:06.171 [Pipeline] httpRequest
00:00:06.175 HttpMethod: GET
00:00:06.175 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:06.176 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:06.182 Response Code: HTTP/1.1 200 OK
00:00:06.183 Success: Status code 200 is in the accepted range: 200,404
00:00:06.183 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:25.030 [Pipeline] }
00:01:25.049 [Pipeline] // retry
00:01:25.057 [Pipeline] sh
00:01:25.348 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:27.906 [Pipeline] sh
00:01:28.190 + git -C spdk log --oneline -n5
00:01:28.190 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:28.191 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:28.191 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:28.191 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:28.191 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:28.213 [Pipeline] writeFile
00:01:28.230 [Pipeline] sh
00:01:28.514 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:28.526 [Pipeline] sh
00:01:28.811 + cat autorun-spdk.conf
00:01:28.811 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.811 SPDK_RUN_ASAN=1
00:01:28.811 SPDK_RUN_UBSAN=1
00:01:28.811 SPDK_TEST_RAID=1
00:01:28.811 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.818 RUN_NIGHTLY=1
00:01:28.821 [Pipeline] }
00:01:28.839 [Pipeline] // stage
00:01:28.860 [Pipeline] stage
00:01:28.862 [Pipeline] { (Run VM)
00:01:28.877 [Pipeline] sh
00:01:29.205 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:29.205 + echo 'Start stage prepare_nvme.sh'
00:01:29.206 Start stage prepare_nvme.sh
00:01:29.206 + [[ -n 5 ]]
00:01:29.206 + disk_prefix=ex5
00:01:29.206 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:29.206 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:29.206 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:29.206 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.206 ++ SPDK_RUN_ASAN=1
00:01:29.206 ++ SPDK_RUN_UBSAN=1
00:01:29.206 ++ SPDK_TEST_RAID=1
00:01:29.206 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.206 ++ RUN_NIGHTLY=1
00:01:29.206 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:29.206 + nvme_files=()
00:01:29.206 + declare -A nvme_files
00:01:29.206 + backend_dir=/var/lib/libvirt/images/backends
00:01:29.206 + nvme_files['nvme.img']=5G
00:01:29.206 + nvme_files['nvme-cmb.img']=5G
00:01:29.206 + nvme_files['nvme-multi0.img']=4G
00:01:29.206 + nvme_files['nvme-multi1.img']=4G
00:01:29.206 + nvme_files['nvme-multi2.img']=4G
00:01:29.206 + nvme_files['nvme-openstack.img']=8G
00:01:29.206 + nvme_files['nvme-zns.img']=5G
00:01:29.206 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:29.206 + (( SPDK_TEST_FTL == 1 ))
00:01:29.206 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:29.206 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.206 + for nvme in "${!nvme_files[@]}"
00:01:29.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:29.206 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.206 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:29.232 + echo 'End stage prepare_nvme.sh'
00:01:29.232 End stage prepare_nvme.sh
00:01:29.237 [Pipeline] sh
00:01:29.521 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:29.521 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:01:29.521
00:01:29.521 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:29.521 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:29.521 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:29.521 HELP=0
00:01:29.521 DRY_RUN=0
00:01:29.521 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:01:29.521 NVME_DISKS_TYPE=nvme,nvme,
00:01:29.521 NVME_AUTO_CREATE=0
00:01:29.521 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:01:29.521 NVME_CMB=,,
00:01:29.521 NVME_PMR=,,
00:01:29.521 NVME_ZNS=,,
00:01:29.521 NVME_MS=,,
00:01:29.521 NVME_FDP=,,
00:01:29.521 SPDK_VAGRANT_DISTRO=fedora39
00:01:29.521 SPDK_VAGRANT_VMCPU=10
00:01:29.521 SPDK_VAGRANT_VMRAM=12288
00:01:29.521 SPDK_VAGRANT_PROVIDER=libvirt
00:01:29.521 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:29.521 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:29.521 SPDK_OPENSTACK_NETWORK=0
00:01:29.521 VAGRANT_PACKAGE_BOX=0
00:01:29.521 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:29.521 FORCE_DISTRO=true
00:01:29.521 VAGRANT_BOX_VERSION=
00:01:29.521 EXTRA_VAGRANTFILES=
00:01:29.521 NIC_MODEL=virtio
00:01:29.521
00:01:29.521 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:29.521 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:31.432 Bringing machine 'default' up with 'libvirt' provider...
00:01:32.002 ==> default: Creating image (snapshot of base box volume).
00:01:32.002 ==> default: Creating domain with the following settings...
00:01:32.002 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732865601_ecfaff6a75b19a2fb999
00:01:32.002 ==> default: -- Domain type: kvm
00:01:32.002 ==> default: -- Cpus: 10
00:01:32.002 ==> default: -- Feature: acpi
00:01:32.002 ==> default: -- Feature: apic
00:01:32.002 ==> default: -- Feature: pae
00:01:32.002 ==> default: -- Memory: 12288M
00:01:32.002 ==> default: -- Memory Backing: hugepages:
00:01:32.002 ==> default: -- Management MAC:
00:01:32.002 ==> default: -- Loader:
00:01:32.002 ==> default: -- Nvram:
00:01:32.002 ==> default: -- Base box: spdk/fedora39
00:01:32.002 ==> default: -- Storage pool: default
00:01:32.002 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732865601_ecfaff6a75b19a2fb999.img (20G)
00:01:32.002 ==> default: -- Volume Cache: default
00:01:32.002 ==> default: -- Kernel:
00:01:32.002 ==> default: -- Initrd:
00:01:32.002 ==> default: -- Graphics Type: vnc
00:01:32.002 ==> default: -- Graphics Port: -1
00:01:32.002 ==> default: -- Graphics IP: 127.0.0.1
00:01:32.002 ==> default: -- Graphics Password: Not defined
00:01:32.002 ==> default: -- Video Type: cirrus
00:01:32.002 ==> default: -- Video VRAM: 9216
00:01:32.002 ==> default: -- Sound Type:
00:01:32.003 ==> default: -- Keymap: en-us
00:01:32.003 ==> default: -- TPM Path:
00:01:32.003 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:32.003 ==> default: -- Command line args:
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:32.003 ==> default: -> value=-drive,
00:01:32.003 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:32.003 ==> default: -> value=-drive,
00:01:32.003 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.003 ==> default: -> value=-drive,
00:01:32.003 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.003 ==> default: -> value=-drive,
00:01:32.003 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:32.003 ==> default: -> value=-device,
00:01:32.003 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.263 ==> default: Creating shared folders metadata...
00:01:32.263 ==> default: Starting domain.
00:01:33.647 ==> default: Waiting for domain to get an IP address...
00:01:51.748 ==> default: Waiting for SSH to become available...
00:01:51.748 ==> default: Configuring and enabling network interfaces...
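The `-device nvme` / `-device nvme-ns` pairs above follow a regular shape: controller nvme-1 (serial 12341) fronts three raw backing files, each attached as its own namespace on the same bus. A sketch of how that argument list composes (paths and serials copied from the log; qemu itself is not launched here, the list is only printed):

```shell
#!/usr/bin/env bash
# Build the multi-namespace NVMe argument list for controller nvme-1.
set -euo pipefail

backend=/var/lib/libvirt/images/backends
args=(-device "nvme,id=nvme-1,serial=12341,addr=0x11")
nsid=1
for img in ex5-nvme-multi0.img ex5-nvme-multi1.img ex5-nvme-multi2.img; do
    drive=nvme-1-drive$((nsid - 1))
    # Each backing file becomes a host-side drive, then an nvme-ns device
    # attached to bus nvme-1 with an incrementing namespace id.
    args+=(-drive "format=raw,file=$backend/$img,if=none,id=$drive")
    args+=(-device "nvme-ns,drive=$drive,bus=nvme-1,nsid=$nsid,zoned=false,logical_block_size=4096,physical_block_size=4096")
    nsid=$((nsid + 1))
done
printf '%s\n' "${args[@]}"
```

This is why the in-guest status table later shows nvme1 with three block devices (nvme1n1, nvme1n2, nvme1n3) while nvme0 has one.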
00:01:57.018 default: SSH address: 192.168.121.90:22
00:01:57.018 default: SSH username: vagrant
00:01:57.018 default: SSH auth method: private key
00:01:58.925 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:07.052 ==> default: Mounting SSHFS shared folder...
00:02:09.594 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:09.594 ==> default: Checking Mount..
00:02:11.503 ==> default: Folder Successfully Mounted!
00:02:11.503 ==> default: Running provisioner: file...
00:02:12.442 default: ~/.gitconfig => .gitconfig
00:02:13.039
00:02:13.039 SUCCESS!
00:02:13.039
00:02:13.039 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:13.039 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:13.039 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:13.039
00:02:13.049 [Pipeline] }
00:02:13.065 [Pipeline] // stage
00:02:13.074 [Pipeline] dir
00:02:13.075 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:13.076 [Pipeline] {
00:02:13.089 [Pipeline] catchError
00:02:13.091 [Pipeline] {
00:02:13.105 [Pipeline] sh
00:02:13.389 + vagrant ssh-config --host vagrant
00:02:13.389 + sed -ne /^Host/,$p
00:02:13.389 + tee ssh_conf
00:02:15.937 Host vagrant
00:02:15.937 HostName 192.168.121.90
00:02:15.938 User vagrant
00:02:15.938 Port 22
00:02:15.938 UserKnownHostsFile /dev/null
00:02:15.938 StrictHostKeyChecking no
00:02:15.938 PasswordAuthentication no
00:02:15.938 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:15.938 IdentitiesOnly yes
00:02:15.938 LogLevel FATAL
00:02:15.938 ForwardAgent yes
00:02:15.938 ForwardX11 yes
00:02:15.938
00:02:15.952 [Pipeline] withEnv
00:02:15.954 [Pipeline] {
00:02:15.967 [Pipeline] sh
00:02:16.251 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:16.252 source /etc/os-release
00:02:16.252 [[ -e /image.version ]] && img=$(< /image.version)
00:02:16.252 # Minimal, systemd-like check.
00:02:16.252 if [[ -e /.dockerenv ]]; then
00:02:16.252 # Clear garbage from the node's name:
00:02:16.252 # agt-er_autotest_547-896 -> autotest_547-896
00:02:16.252 # $HOSTNAME is the actual container id
00:02:16.252 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:16.252 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:16.252 # We can assume this is a mount from a host where container is running,
00:02:16.252 # so fetch its hostname to easily identify the target swarm worker.
00:02:16.252 container="$(< /etc/hostname) ($agent)"
00:02:16.252 else
00:02:16.252 # Fallback
00:02:16.252 container=$agent
00:02:16.252 fi
00:02:16.252 fi
00:02:16.252 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:16.252
00:02:16.526 [Pipeline] }
00:02:16.543 [Pipeline] // withEnv
00:02:16.551 [Pipeline] setCustomBuildProperty
00:02:16.566 [Pipeline] stage
00:02:16.568 [Pipeline] { (Tests)
00:02:16.585 [Pipeline] sh
00:02:16.869 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:17.144 [Pipeline] sh
00:02:17.428 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:17.705 [Pipeline] timeout
00:02:17.706 Timeout set to expire in 1 hr 30 min
00:02:17.708 [Pipeline] {
00:02:17.723 [Pipeline] sh
00:02:18.008 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:18.578 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:18.592 [Pipeline] sh
00:02:18.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:19.148 [Pipeline] sh
00:02:19.430 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:19.708 [Pipeline] sh
00:02:19.992 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:20.251 ++ readlink -f spdk_repo
00:02:20.251 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:20.251 + [[ -n /home/vagrant/spdk_repo ]]
00:02:20.251 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:20.251 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:20.251 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:20.251 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:20.251 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:20.251 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:20.251 + cd /home/vagrant/spdk_repo
00:02:20.251 + source /etc/os-release
00:02:20.251 ++ NAME='Fedora Linux'
00:02:20.251 ++ VERSION='39 (Cloud Edition)'
00:02:20.251 ++ ID=fedora
00:02:20.251 ++ VERSION_ID=39
00:02:20.251 ++ VERSION_CODENAME=
00:02:20.251 ++ PLATFORM_ID=platform:f39
00:02:20.251 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:20.251 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:20.251 ++ LOGO=fedora-logo-icon
00:02:20.251 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:20.251 ++ HOME_URL=https://fedoraproject.org/
00:02:20.251 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:20.251 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:20.251 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:20.251 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:20.251 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:20.251 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:20.251 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:20.251 ++ SUPPORT_END=2024-11-12
00:02:20.251 ++ VARIANT='Cloud Edition'
00:02:20.251 ++ VARIANT_ID=cloud
00:02:20.251 + uname -a
00:02:20.251 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:20.251 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:20.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:20.819 Hugepages
00:02:20.819 node hugesize free / total
00:02:20.819 node0 1048576kB 0 / 0
00:02:20.819 node0 2048kB 0 / 0
00:02:20.819
00:02:20.819 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:20.819 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:20.819 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:20.819 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:20.819 + rm -f /tmp/spdk-ld-path
00:02:20.819 + source autorun-spdk.conf
00:02:20.819 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.820 ++ SPDK_RUN_ASAN=1
00:02:20.820 ++ SPDK_RUN_UBSAN=1
00:02:20.820 ++ SPDK_TEST_RAID=1
00:02:20.820 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.820 ++ RUN_NIGHTLY=1
00:02:20.820 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:20.820 + [[ -n '' ]]
00:02:20.820 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:21.080 + for M in /var/spdk/build-*-manifest.txt
00:02:21.080 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:21.080 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:21.080 + for M in /var/spdk/build-*-manifest.txt
00:02:21.080 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:21.080 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:21.080 + for M in /var/spdk/build-*-manifest.txt
00:02:21.080 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:21.080 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:21.080 ++ uname
00:02:21.080 + [[ Linux == \L\i\n\u\x ]]
00:02:21.080 + sudo dmesg -T
00:02:21.080 + sudo dmesg --clear
00:02:21.080 + dmesg_pid=5419
00:02:21.080 + sudo dmesg -Tw
00:02:21.080 + [[ Fedora Linux == FreeBSD ]]
00:02:21.080 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:21.080 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:21.080 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:21.080 + [[ -x /usr/src/fio-static/fio ]]
00:02:21.080 + export FIO_BIN=/usr/src/fio-static/fio
00:02:21.080 + FIO_BIN=/usr/src/fio-static/fio
00:02:21.080 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:21.080 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:21.080 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:21.080 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:21.080 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:21.080 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:21.080 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:21.080 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:21.080 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.080 07:34:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:21.080 07:34:10 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.080 07:34:10 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.080 07:34:10 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:21.080 07:34:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:21.080 07:34:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:21.080 07:34:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:21.080 07:34:11 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:02:21.080 07:34:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:21.080 07:34:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.341 07:34:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:21.341 07:34:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:21.341 07:34:11 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:21.341 07:34:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:21.341 07:34:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:21.341 07:34:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:21.341 07:34:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.341 07:34:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.341 07:34:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.341 07:34:11 -- paths/export.sh@5 -- $ export PATH
00:02:21.341 07:34:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.341 07:34:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:21.341 07:34:11 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:21.341 07:34:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732865651.XXXXXX
00:02:21.341 07:34:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732865651.M08whz
00:02:21.341 07:34:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:21.341 07:34:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:21.341 07:34:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:21.341 07:34:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:21.341 07:34:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:21.341 07:34:11 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:21.341 07:34:11 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:21.341 07:34:11 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.341 07:34:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:21.341 07:34:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:21.341 07:34:11 -- pm/common@17 -- $ local monitor
00:02:21.341 07:34:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.341 07:34:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.341 07:34:11 -- pm/common@25 -- $ sleep 1
00:02:21.341 07:34:11 -- pm/common@21 -- $ date +%s
00:02:21.341 07:34:11 -- pm/common@21 -- $ date +%s
00:02:21.341 07:34:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732865651
00:02:21.341 07:34:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732865651
00:02:21.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732865651_collect-cpu-load.pm.log
00:02:21.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732865651_collect-vmstat.pm.log
00:02:22.309 07:34:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:22.309 07:34:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:22.309 07:34:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:22.309 07:34:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:22.309 07:34:12 -- spdk/autobuild.sh@16 -- $ date -u
00:02:22.309 Fri Nov 29 07:34:12 AM UTC 2024
00:02:22.309 07:34:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:22.309 v25.01-pre-276-g35cd3e84d
00:02:22.309 07:34:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:22.309 07:34:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:22.309 07:34:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.309 07:34:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.309 07:34:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.309 ************************************
00:02:22.309 START TEST asan
00:02:22.309 ************************************
00:02:22.309 using asan
00:02:22.309 07:34:12 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:22.309
00:02:22.309 real 0m0.001s
00:02:22.309 user 0m0.000s
00:02:22.309 sys 0m0.000s
00:02:22.310 07:34:12 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:22.310 07:34:12 asan -- common/autotest_common.sh@10 -- $ set +x
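The asan block above shows the shape of SPDK's `run_test` wrapper: a banner, the wrapped command timed (the `real`/`user`/`sys` lines), then a closing banner. A rough standalone sketch of that shape; the real `run_test` lives in SPDK's `autotest_common.sh` and additionally manages xtrace state and exit codes, so `run_test_sketch` here is a hypothetical stand-in, not the actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the banner-and-time wrapping seen in the log.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"   # timing lines (real/user/sys) go to stderr, as in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test_sketch asan echo 'using asan'
```

Running it reproduces the START/END banners around the command's own output, with the timing summary interleaved from stderr.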
00:02:22.310 ************************************
00:02:22.310 END TEST asan
00:02:22.310 ************************************
00:02:22.586 07:34:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:22.586 07:34:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:22.586 07:34:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.586 07:34:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.586 07:34:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.586 ************************************
00:02:22.586 START TEST ubsan
00:02:22.586 ************************************
00:02:22.586 using ubsan
00:02:22.586 07:34:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:22.586
00:02:22.586 real 0m0.000s
00:02:22.586 user 0m0.000s
00:02:22.586 sys 0m0.000s
00:02:22.586 07:34:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:22.586 07:34:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:22.586 ************************************
00:02:22.586 END TEST ubsan
00:02:22.586 ************************************
00:02:22.586 07:34:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:22.586 07:34:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:22.586 07:34:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:22.586 07:34:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:22.587 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:22.587 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:23.155 Using 'verbs' RDMA provider
00:02:38.985 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:57.083 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:57.083 Creating mk/config.mk...done.
00:02:57.083 Creating mk/cc.flags.mk...done.
00:02:57.083 Type 'make' to build.
00:02:57.083 07:34:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:57.083 07:34:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:57.083 07:34:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:57.083 07:34:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.083 ************************************
00:02:57.083 START TEST make
00:02:57.083 ************************************
00:02:57.083 07:34:44 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:57.083 make[1]: Nothing to be done for 'all'.
00:03:05.211 The Meson build system 00:03:05.211 Version: 1.5.0 00:03:05.211 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:05.211 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:05.211 Build type: native build 00:03:05.211 Program cat found: YES (/usr/bin/cat) 00:03:05.211 Project name: DPDK 00:03:05.211 Project version: 24.03.0 00:03:05.211 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:05.211 C linker for the host machine: cc ld.bfd 2.40-14 00:03:05.211 Host machine cpu family: x86_64 00:03:05.211 Host machine cpu: x86_64 00:03:05.211 Message: ## Building in Developer Mode ## 00:03:05.211 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:05.211 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:05.211 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:05.211 Program python3 found: YES (/usr/bin/python3) 00:03:05.211 Program cat found: YES (/usr/bin/cat) 00:03:05.211 Compiler for C supports arguments -march=native: YES 00:03:05.211 Checking for size of "void *" : 8 00:03:05.211 Checking for size of "void *" : 8 (cached) 00:03:05.211 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:05.211 Library m found: YES 00:03:05.211 Library numa found: YES 00:03:05.211 Has header "numaif.h" : YES 00:03:05.211 Library fdt found: NO 00:03:05.211 Library execinfo found: NO 00:03:05.211 Has header "execinfo.h" : YES 00:03:05.211 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:05.211 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:05.211 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:05.211 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:05.211 Run-time dependency openssl found: YES 3.1.1 00:03:05.211 Run-time dependency libpcap found: YES 1.10.4 00:03:05.211 Has header "pcap.h" with dependency 
libpcap: YES 00:03:05.211 Compiler for C supports arguments -Wcast-qual: YES 00:03:05.211 Compiler for C supports arguments -Wdeprecated: YES 00:03:05.211 Compiler for C supports arguments -Wformat: YES 00:03:05.211 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:05.211 Compiler for C supports arguments -Wformat-security: NO 00:03:05.211 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.211 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:05.211 Compiler for C supports arguments -Wnested-externs: YES 00:03:05.211 Compiler for C supports arguments -Wold-style-definition: YES 00:03:05.211 Compiler for C supports arguments -Wpointer-arith: YES 00:03:05.211 Compiler for C supports arguments -Wsign-compare: YES 00:03:05.211 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:05.211 Compiler for C supports arguments -Wundef: YES 00:03:05.211 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.211 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:05.211 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:05.211 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.211 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:05.211 Program objdump found: YES (/usr/bin/objdump) 00:03:05.211 Compiler for C supports arguments -mavx512f: YES 00:03:05.211 Checking if "AVX512 checking" compiles: YES 00:03:05.211 Fetching value of define "__SSE4_2__" : 1 00:03:05.211 Fetching value of define "__AES__" : 1 00:03:05.211 Fetching value of define "__AVX__" : 1 00:03:05.211 Fetching value of define "__AVX2__" : 1 00:03:05.211 Fetching value of define "__AVX512BW__" : 1 00:03:05.211 Fetching value of define "__AVX512CD__" : 1 00:03:05.211 Fetching value of define "__AVX512DQ__" : 1 00:03:05.211 Fetching value of define "__AVX512F__" : 1 00:03:05.211 Fetching value of define "__AVX512VL__" : 1 00:03:05.211 Fetching value of define 
"__PCLMUL__" : 1 00:03:05.211 Fetching value of define "__RDRND__" : 1 00:03:05.211 Fetching value of define "__RDSEED__" : 1 00:03:05.211 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:05.211 Fetching value of define "__znver1__" : (undefined) 00:03:05.211 Fetching value of define "__znver2__" : (undefined) 00:03:05.211 Fetching value of define "__znver3__" : (undefined) 00:03:05.211 Fetching value of define "__znver4__" : (undefined) 00:03:05.211 Library asan found: YES 00:03:05.211 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:05.211 Message: lib/log: Defining dependency "log" 00:03:05.211 Message: lib/kvargs: Defining dependency "kvargs" 00:03:05.211 Message: lib/telemetry: Defining dependency "telemetry" 00:03:05.211 Library rt found: YES 00:03:05.211 Checking for function "getentropy" : NO 00:03:05.211 Message: lib/eal: Defining dependency "eal" 00:03:05.211 Message: lib/ring: Defining dependency "ring" 00:03:05.211 Message: lib/rcu: Defining dependency "rcu" 00:03:05.211 Message: lib/mempool: Defining dependency "mempool" 00:03:05.211 Message: lib/mbuf: Defining dependency "mbuf" 00:03:05.211 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:05.211 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.211 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.211 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.211 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:05.211 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:05.211 Compiler for C supports arguments -mpclmul: YES 00:03:05.211 Compiler for C supports arguments -maes: YES 00:03:05.211 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:05.211 Compiler for C supports arguments -mavx512bw: YES 00:03:05.211 Compiler for C supports arguments -mavx512dq: YES 00:03:05.211 Compiler for C supports arguments -mavx512vl: YES 00:03:05.211 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:05.211 Compiler for C supports arguments -mavx2: YES 00:03:05.211 Compiler for C supports arguments -mavx: YES 00:03:05.211 Message: lib/net: Defining dependency "net" 00:03:05.211 Message: lib/meter: Defining dependency "meter" 00:03:05.211 Message: lib/ethdev: Defining dependency "ethdev" 00:03:05.212 Message: lib/pci: Defining dependency "pci" 00:03:05.212 Message: lib/cmdline: Defining dependency "cmdline" 00:03:05.212 Message: lib/hash: Defining dependency "hash" 00:03:05.212 Message: lib/timer: Defining dependency "timer" 00:03:05.212 Message: lib/compressdev: Defining dependency "compressdev" 00:03:05.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:05.212 Message: lib/dmadev: Defining dependency "dmadev" 00:03:05.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:05.212 Message: lib/power: Defining dependency "power" 00:03:05.212 Message: lib/reorder: Defining dependency "reorder" 00:03:05.212 Message: lib/security: Defining dependency "security" 00:03:05.212 Has header "linux/userfaultfd.h" : YES 00:03:05.212 Has header "linux/vduse.h" : YES 00:03:05.212 Message: lib/vhost: Defining dependency "vhost" 00:03:05.212 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:05.212 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:05.212 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:05.212 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:05.212 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:05.212 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:05.212 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:05.212 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:05.212 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:05.212 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:05.212 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:05.212 Configuring doxy-api-html.conf using configuration 00:03:05.212 Configuring doxy-api-man.conf using configuration 00:03:05.212 Program mandb found: YES (/usr/bin/mandb) 00:03:05.212 Program sphinx-build found: NO 00:03:05.212 Configuring rte_build_config.h using configuration 00:03:05.212 Message: 00:03:05.212 ================= 00:03:05.212 Applications Enabled 00:03:05.212 ================= 00:03:05.212 00:03:05.212 apps: 00:03:05.212 00:03:05.212 00:03:05.212 Message: 00:03:05.212 ================= 00:03:05.212 Libraries Enabled 00:03:05.212 ================= 00:03:05.212 00:03:05.212 libs: 00:03:05.212 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:05.212 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:05.212 cryptodev, dmadev, power, reorder, security, vhost, 00:03:05.212 00:03:05.212 Message: 00:03:05.212 =============== 00:03:05.212 Drivers Enabled 00:03:05.212 =============== 00:03:05.212 00:03:05.212 common: 00:03:05.212 00:03:05.212 bus: 00:03:05.212 pci, vdev, 00:03:05.212 mempool: 00:03:05.212 ring, 00:03:05.212 dma: 00:03:05.212 00:03:05.212 net: 00:03:05.212 00:03:05.212 crypto: 00:03:05.212 00:03:05.212 compress: 00:03:05.212 00:03:05.212 vdpa: 00:03:05.212 00:03:05.212 00:03:05.212 Message: 00:03:05.212 ================= 00:03:05.212 Content Skipped 00:03:05.212 ================= 00:03:05.212 00:03:05.212 apps: 00:03:05.212 dumpcap: explicitly disabled via build config 00:03:05.212 graph: explicitly disabled via build config 00:03:05.212 pdump: explicitly disabled via build config 00:03:05.212 proc-info: explicitly disabled via build config 00:03:05.212 test-acl: explicitly disabled via build config 00:03:05.212 test-bbdev: explicitly disabled via build config 00:03:05.212 test-cmdline: explicitly disabled via build config 00:03:05.212 test-compress-perf: explicitly disabled via build config 00:03:05.212 test-crypto-perf: explicitly disabled via build 
config 00:03:05.212 test-dma-perf: explicitly disabled via build config 00:03:05.212 test-eventdev: explicitly disabled via build config 00:03:05.212 test-fib: explicitly disabled via build config 00:03:05.212 test-flow-perf: explicitly disabled via build config 00:03:05.212 test-gpudev: explicitly disabled via build config 00:03:05.212 test-mldev: explicitly disabled via build config 00:03:05.212 test-pipeline: explicitly disabled via build config 00:03:05.212 test-pmd: explicitly disabled via build config 00:03:05.212 test-regex: explicitly disabled via build config 00:03:05.212 test-sad: explicitly disabled via build config 00:03:05.212 test-security-perf: explicitly disabled via build config 00:03:05.212 00:03:05.212 libs: 00:03:05.212 argparse: explicitly disabled via build config 00:03:05.212 metrics: explicitly disabled via build config 00:03:05.212 acl: explicitly disabled via build config 00:03:05.212 bbdev: explicitly disabled via build config 00:03:05.212 bitratestats: explicitly disabled via build config 00:03:05.212 bpf: explicitly disabled via build config 00:03:05.212 cfgfile: explicitly disabled via build config 00:03:05.212 distributor: explicitly disabled via build config 00:03:05.212 efd: explicitly disabled via build config 00:03:05.212 eventdev: explicitly disabled via build config 00:03:05.212 dispatcher: explicitly disabled via build config 00:03:05.212 gpudev: explicitly disabled via build config 00:03:05.212 gro: explicitly disabled via build config 00:03:05.212 gso: explicitly disabled via build config 00:03:05.212 ip_frag: explicitly disabled via build config 00:03:05.212 jobstats: explicitly disabled via build config 00:03:05.212 latencystats: explicitly disabled via build config 00:03:05.212 lpm: explicitly disabled via build config 00:03:05.212 member: explicitly disabled via build config 00:03:05.212 pcapng: explicitly disabled via build config 00:03:05.212 rawdev: explicitly disabled via build config 00:03:05.212 regexdev: explicitly 
disabled via build config 00:03:05.212 mldev: explicitly disabled via build config 00:03:05.212 rib: explicitly disabled via build config 00:03:05.212 sched: explicitly disabled via build config 00:03:05.212 stack: explicitly disabled via build config 00:03:05.212 ipsec: explicitly disabled via build config 00:03:05.212 pdcp: explicitly disabled via build config 00:03:05.212 fib: explicitly disabled via build config 00:03:05.212 port: explicitly disabled via build config 00:03:05.212 pdump: explicitly disabled via build config 00:03:05.212 table: explicitly disabled via build config 00:03:05.212 pipeline: explicitly disabled via build config 00:03:05.212 graph: explicitly disabled via build config 00:03:05.212 node: explicitly disabled via build config 00:03:05.212 00:03:05.212 drivers: 00:03:05.212 common/cpt: not in enabled drivers build config 00:03:05.212 common/dpaax: not in enabled drivers build config 00:03:05.212 common/iavf: not in enabled drivers build config 00:03:05.212 common/idpf: not in enabled drivers build config 00:03:05.212 common/ionic: not in enabled drivers build config 00:03:05.212 common/mvep: not in enabled drivers build config 00:03:05.212 common/octeontx: not in enabled drivers build config 00:03:05.212 bus/auxiliary: not in enabled drivers build config 00:03:05.212 bus/cdx: not in enabled drivers build config 00:03:05.212 bus/dpaa: not in enabled drivers build config 00:03:05.212 bus/fslmc: not in enabled drivers build config 00:03:05.212 bus/ifpga: not in enabled drivers build config 00:03:05.212 bus/platform: not in enabled drivers build config 00:03:05.212 bus/uacce: not in enabled drivers build config 00:03:05.212 bus/vmbus: not in enabled drivers build config 00:03:05.212 common/cnxk: not in enabled drivers build config 00:03:05.212 common/mlx5: not in enabled drivers build config 00:03:05.212 common/nfp: not in enabled drivers build config 00:03:05.212 common/nitrox: not in enabled drivers build config 00:03:05.212 common/qat: not 
in enabled drivers build config 00:03:05.212 common/sfc_efx: not in enabled drivers build config 00:03:05.212 mempool/bucket: not in enabled drivers build config 00:03:05.212 mempool/cnxk: not in enabled drivers build config 00:03:05.212 mempool/dpaa: not in enabled drivers build config 00:03:05.212 mempool/dpaa2: not in enabled drivers build config 00:03:05.212 mempool/octeontx: not in enabled drivers build config 00:03:05.212 mempool/stack: not in enabled drivers build config 00:03:05.212 dma/cnxk: not in enabled drivers build config 00:03:05.212 dma/dpaa: not in enabled drivers build config 00:03:05.212 dma/dpaa2: not in enabled drivers build config 00:03:05.212 dma/hisilicon: not in enabled drivers build config 00:03:05.212 dma/idxd: not in enabled drivers build config 00:03:05.212 dma/ioat: not in enabled drivers build config 00:03:05.212 dma/skeleton: not in enabled drivers build config 00:03:05.212 net/af_packet: not in enabled drivers build config 00:03:05.212 net/af_xdp: not in enabled drivers build config 00:03:05.212 net/ark: not in enabled drivers build config 00:03:05.212 net/atlantic: not in enabled drivers build config 00:03:05.212 net/avp: not in enabled drivers build config 00:03:05.212 net/axgbe: not in enabled drivers build config 00:03:05.212 net/bnx2x: not in enabled drivers build config 00:03:05.212 net/bnxt: not in enabled drivers build config 00:03:05.212 net/bonding: not in enabled drivers build config 00:03:05.212 net/cnxk: not in enabled drivers build config 00:03:05.212 net/cpfl: not in enabled drivers build config 00:03:05.212 net/cxgbe: not in enabled drivers build config 00:03:05.212 net/dpaa: not in enabled drivers build config 00:03:05.212 net/dpaa2: not in enabled drivers build config 00:03:05.212 net/e1000: not in enabled drivers build config 00:03:05.212 net/ena: not in enabled drivers build config 00:03:05.212 net/enetc: not in enabled drivers build config 00:03:05.212 net/enetfec: not in enabled drivers build config 
00:03:05.212 net/enic: not in enabled drivers build config 00:03:05.212 net/failsafe: not in enabled drivers build config 00:03:05.212 net/fm10k: not in enabled drivers build config 00:03:05.212 net/gve: not in enabled drivers build config 00:03:05.212 net/hinic: not in enabled drivers build config 00:03:05.212 net/hns3: not in enabled drivers build config 00:03:05.212 net/i40e: not in enabled drivers build config 00:03:05.212 net/iavf: not in enabled drivers build config 00:03:05.212 net/ice: not in enabled drivers build config 00:03:05.212 net/idpf: not in enabled drivers build config 00:03:05.213 net/igc: not in enabled drivers build config 00:03:05.213 net/ionic: not in enabled drivers build config 00:03:05.213 net/ipn3ke: not in enabled drivers build config 00:03:05.213 net/ixgbe: not in enabled drivers build config 00:03:05.213 net/mana: not in enabled drivers build config 00:03:05.213 net/memif: not in enabled drivers build config 00:03:05.213 net/mlx4: not in enabled drivers build config 00:03:05.213 net/mlx5: not in enabled drivers build config 00:03:05.213 net/mvneta: not in enabled drivers build config 00:03:05.213 net/mvpp2: not in enabled drivers build config 00:03:05.213 net/netvsc: not in enabled drivers build config 00:03:05.213 net/nfb: not in enabled drivers build config 00:03:05.213 net/nfp: not in enabled drivers build config 00:03:05.213 net/ngbe: not in enabled drivers build config 00:03:05.213 net/null: not in enabled drivers build config 00:03:05.213 net/octeontx: not in enabled drivers build config 00:03:05.213 net/octeon_ep: not in enabled drivers build config 00:03:05.213 net/pcap: not in enabled drivers build config 00:03:05.213 net/pfe: not in enabled drivers build config 00:03:05.213 net/qede: not in enabled drivers build config 00:03:05.213 net/ring: not in enabled drivers build config 00:03:05.213 net/sfc: not in enabled drivers build config 00:03:05.213 net/softnic: not in enabled drivers build config 00:03:05.213 net/tap: not in 
enabled drivers build config 00:03:05.213 net/thunderx: not in enabled drivers build config 00:03:05.213 net/txgbe: not in enabled drivers build config 00:03:05.213 net/vdev_netvsc: not in enabled drivers build config 00:03:05.213 net/vhost: not in enabled drivers build config 00:03:05.213 net/virtio: not in enabled drivers build config 00:03:05.213 net/vmxnet3: not in enabled drivers build config 00:03:05.213 raw/*: missing internal dependency, "rawdev" 00:03:05.213 crypto/armv8: not in enabled drivers build config 00:03:05.213 crypto/bcmfs: not in enabled drivers build config 00:03:05.213 crypto/caam_jr: not in enabled drivers build config 00:03:05.213 crypto/ccp: not in enabled drivers build config 00:03:05.213 crypto/cnxk: not in enabled drivers build config 00:03:05.213 crypto/dpaa_sec: not in enabled drivers build config 00:03:05.213 crypto/dpaa2_sec: not in enabled drivers build config 00:03:05.213 crypto/ipsec_mb: not in enabled drivers build config 00:03:05.213 crypto/mlx5: not in enabled drivers build config 00:03:05.213 crypto/mvsam: not in enabled drivers build config 00:03:05.213 crypto/nitrox: not in enabled drivers build config 00:03:05.213 crypto/null: not in enabled drivers build config 00:03:05.213 crypto/octeontx: not in enabled drivers build config 00:03:05.213 crypto/openssl: not in enabled drivers build config 00:03:05.213 crypto/scheduler: not in enabled drivers build config 00:03:05.213 crypto/uadk: not in enabled drivers build config 00:03:05.213 crypto/virtio: not in enabled drivers build config 00:03:05.213 compress/isal: not in enabled drivers build config 00:03:05.213 compress/mlx5: not in enabled drivers build config 00:03:05.213 compress/nitrox: not in enabled drivers build config 00:03:05.213 compress/octeontx: not in enabled drivers build config 00:03:05.213 compress/zlib: not in enabled drivers build config 00:03:05.213 regex/*: missing internal dependency, "regexdev" 00:03:05.213 ml/*: missing internal dependency, "mldev" 
00:03:05.213 vdpa/ifc: not in enabled drivers build config 00:03:05.213 vdpa/mlx5: not in enabled drivers build config 00:03:05.213 vdpa/nfp: not in enabled drivers build config 00:03:05.213 vdpa/sfc: not in enabled drivers build config 00:03:05.213 event/*: missing internal dependency, "eventdev" 00:03:05.213 baseband/*: missing internal dependency, "bbdev" 00:03:05.213 gpu/*: missing internal dependency, "gpudev" 00:03:05.213 00:03:05.213 00:03:05.213 Build targets in project: 85 00:03:05.213 00:03:05.213 DPDK 24.03.0 00:03:05.213 00:03:05.213 User defined options 00:03:05.213 buildtype : debug 00:03:05.213 default_library : shared 00:03:05.213 libdir : lib 00:03:05.213 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:05.213 b_sanitize : address 00:03:05.213 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:05.213 c_link_args : 00:03:05.213 cpu_instruction_set: native 00:03:05.213 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:05.213 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:05.213 enable_docs : false 00:03:05.213 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:05.213 enable_kmods : false 00:03:05.213 max_lcores : 128 00:03:05.213 tests : false 00:03:05.213 00:03:05.213 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.471 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:05.471 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:03:05.471 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:05.471 [3/268] Linking static target lib/librte_kvargs.a 00:03:05.729 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:05.729 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:05.729 [6/268] Linking static target lib/librte_log.a 00:03:05.988 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.988 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:05.988 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:05.988 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:05.988 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:05.988 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:05.988 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:05.988 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:05.988 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:06.247 [16/268] Linking static target lib/librte_telemetry.a 00:03:06.247 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.247 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.508 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.508 [20/268] Linking target lib/librte_log.so.24.1 00:03:06.508 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.768 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.768 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.768 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.768 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.768 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:06.768 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.768 [28/268] Linking target lib/librte_kvargs.so.24.1 00:03:06.769 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.769 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.028 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.028 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.028 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:07.028 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.028 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:07.288 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.288 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:07.288 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.288 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.288 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.288 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.288 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.288 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.288 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.547 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.548 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:07.548 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:07.807 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:07.807 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.807 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.807 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:07.807 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.067 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.067 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.067 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.067 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.067 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.067 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.327 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.327 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.327 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.327 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.327 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.588 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.588 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.588 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.588 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:03:08.588 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.848 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.848 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.848 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.107 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:09.107 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:09.107 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:09.107 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:09.107 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:09.107 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:09.107 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.367 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.367 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.367 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.367 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.627 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.627 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.627 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.627 [86/268] Linking static target lib/librte_ring.a 00:03:09.627 [87/268] Linking static target lib/librte_eal.a 00:03:09.888 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.888 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.888 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.888 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.888 [92/268] Linking static target lib/librte_mempool.a 00:03:10.148 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:10.148 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:10.148 [95/268] Linking static target lib/librte_rcu.a 00:03:10.148 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.148 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.148 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.408 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.408 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.408 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.408 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.408 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.668 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.668 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.668 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.668 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.668 [108/268] Linking static target lib/librte_net.a 00:03:10.668 [109/268] Linking static target lib/librte_mbuf.a 00:03:10.668 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.668 [111/268] Linking static target lib/librte_meter.a 00:03:10.927 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.927 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.927 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.187 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:11.187 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.187 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.187 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.447 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.447 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:11.707 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.707 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:11.707 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:11.707 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:11.967 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:11.967 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:11.967 [127/268] Linking static target lib/librte_pci.a 00:03:11.967 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:11.967 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:12.301 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:12.301 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:12.301 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:12.301 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.301 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:12.301 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:12.301 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:12.301 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:12.301 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:12.301 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:12.577 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:12.577 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:12.577 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:12.577 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:12.577 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:12.577 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:12.577 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:12.577 [147/268] Linking static target lib/librte_cmdline.a 00:03:12.836 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:12.836 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:12.837 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:13.097 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:13.097 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:13.097 [153/268] Linking static target lib/librte_timer.a 00:03:13.097 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:13.097 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:13.357 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:13.357 [157/268] Linking static target lib/librte_compressdev.a 00:03:13.617 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.617 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:13.617 [160/268] Linking static target lib/librte_hash.a 00:03:13.617 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.617 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:13.617 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.617 [164/268] Linking static target lib/librte_ethdev.a 00:03:13.617 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:13.617 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.878 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.878 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:13.878 [169/268] Linking static target lib/librte_dmadev.a 00:03:13.878 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:14.141 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.141 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:14.141 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.141 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.141 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:14.401 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:14.401 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.661 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:14.661 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.661 [180/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.661 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.661 [182/268] Linking static target lib/librte_cryptodev.a 00:03:14.661 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:14.661 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:14.661 [185/268] Linking static target lib/librte_power.a 00:03:14.661 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.920 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.179 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.179 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.179 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.179 [191/268] Linking static target lib/librte_reorder.a 00:03:15.179 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.179 [193/268] Linking static target lib/librte_security.a 00:03:15.746 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.746 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:15.746 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.005 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.005 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.265 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:16.265 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:16.265 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.265 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:16.524 [203/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:16.524 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:16.524 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:16.783 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:16.783 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:16.783 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:16.783 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:16.783 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:17.043 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.043 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:17.043 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.043 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.043 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:17.043 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:17.043 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.043 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.301 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:17.301 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.301 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.301 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:17.301 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.301 [224/268] Compiling C 
object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.301 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.301 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:17.560 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.498 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:19.876 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.876 [230/268] Linking target lib/librte_eal.so.24.1 00:03:19.876 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:19.876 [232/268] Linking target lib/librte_dmadev.so.24.1 00:03:19.876 [233/268] Linking target lib/librte_ring.so.24.1 00:03:19.876 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:19.876 [235/268] Linking target lib/librte_meter.so.24.1 00:03:19.876 [236/268] Linking target lib/librte_pci.so.24.1 00:03:19.876 [237/268] Linking target lib/librte_timer.so.24.1 00:03:20.134 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:20.134 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:20.134 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:20.134 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:20.134 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:20.134 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:20.134 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:20.134 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:20.134 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:20.134 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:20.392 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:20.392 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:20.392 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:20.392 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:20.392 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:20.392 [253/268] Linking target lib/librte_net.so.24.1 00:03:20.392 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:20.650 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:20.650 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:20.650 [257/268] Linking target lib/librte_security.so.24.1 00:03:20.650 [258/268] Linking target lib/librte_hash.so.24.1 00:03:20.651 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:20.908 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:21.844 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.103 [262/268] Linking static target lib/librte_vhost.a 00:03:22.362 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.362 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:22.622 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:22.622 [266/268] Linking target lib/librte_power.so.24.1 00:03:24.527 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.527 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:24.527 INFO: autodetecting backend as ninja 00:03:24.527 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:42.606 CC lib/ut_mock/mock.o 00:03:42.606 CC lib/log/log.o 00:03:42.606 CC 
lib/log/log_flags.o 00:03:42.606 CC lib/log/log_deprecated.o 00:03:42.606 CC lib/ut/ut.o 00:03:42.606 LIB libspdk_ut_mock.a 00:03:42.606 LIB libspdk_ut.a 00:03:42.606 LIB libspdk_log.a 00:03:42.606 SO libspdk_ut_mock.so.6.0 00:03:42.606 SO libspdk_ut.so.2.0 00:03:42.606 SO libspdk_log.so.7.1 00:03:42.606 SYMLINK libspdk_ut_mock.so 00:03:42.606 SYMLINK libspdk_ut.so 00:03:42.606 SYMLINK libspdk_log.so 00:03:42.606 CC lib/ioat/ioat.o 00:03:42.606 CXX lib/trace_parser/trace.o 00:03:42.606 CC lib/dma/dma.o 00:03:42.606 CC lib/util/base64.o 00:03:42.606 CC lib/util/bit_array.o 00:03:42.606 CC lib/util/cpuset.o 00:03:42.606 CC lib/util/crc16.o 00:03:42.606 CC lib/util/crc32.o 00:03:42.606 CC lib/util/crc32c.o 00:03:42.606 CC lib/vfio_user/host/vfio_user_pci.o 00:03:42.606 CC lib/vfio_user/host/vfio_user.o 00:03:42.606 CC lib/util/crc32_ieee.o 00:03:42.606 CC lib/util/crc64.o 00:03:42.606 CC lib/util/dif.o 00:03:42.606 LIB libspdk_dma.a 00:03:42.606 SO libspdk_dma.so.5.0 00:03:42.606 CC lib/util/fd.o 00:03:42.606 SYMLINK libspdk_dma.so 00:03:42.606 CC lib/util/fd_group.o 00:03:42.606 CC lib/util/file.o 00:03:42.606 CC lib/util/hexlify.o 00:03:42.606 CC lib/util/iov.o 00:03:42.606 LIB libspdk_ioat.a 00:03:42.606 CC lib/util/math.o 00:03:42.606 SO libspdk_ioat.so.7.0 00:03:42.606 LIB libspdk_vfio_user.a 00:03:42.606 CC lib/util/net.o 00:03:42.606 SO libspdk_vfio_user.so.5.0 00:03:42.606 SYMLINK libspdk_ioat.so 00:03:42.606 CC lib/util/pipe.o 00:03:42.607 CC lib/util/strerror_tls.o 00:03:42.607 CC lib/util/string.o 00:03:42.607 CC lib/util/uuid.o 00:03:42.607 SYMLINK libspdk_vfio_user.so 00:03:42.607 CC lib/util/xor.o 00:03:42.607 CC lib/util/zipf.o 00:03:42.607 CC lib/util/md5.o 00:03:42.865 LIB libspdk_util.a 00:03:43.124 SO libspdk_util.so.10.1 00:03:43.124 LIB libspdk_trace_parser.a 00:03:43.124 SO libspdk_trace_parser.so.6.0 00:03:43.124 SYMLINK libspdk_util.so 00:03:43.383 SYMLINK libspdk_trace_parser.so 00:03:43.383 CC lib/vmd/vmd.o 00:03:43.383 CC lib/vmd/led.o 
00:03:43.383 CC lib/json/json_util.o 00:03:43.383 CC lib/json/json_write.o 00:03:43.383 CC lib/json/json_parse.o 00:03:43.383 CC lib/env_dpdk/env.o 00:03:43.383 CC lib/conf/conf.o 00:03:43.383 CC lib/env_dpdk/memory.o 00:03:43.383 CC lib/rdma_utils/rdma_utils.o 00:03:43.383 CC lib/idxd/idxd.o 00:03:43.641 CC lib/idxd/idxd_user.o 00:03:43.641 LIB libspdk_conf.a 00:03:43.641 CC lib/idxd/idxd_kernel.o 00:03:43.641 SO libspdk_conf.so.6.0 00:03:43.641 CC lib/env_dpdk/pci.o 00:03:43.641 LIB libspdk_rdma_utils.a 00:03:43.641 LIB libspdk_json.a 00:03:43.641 SYMLINK libspdk_conf.so 00:03:43.641 SO libspdk_rdma_utils.so.1.0 00:03:43.641 SO libspdk_json.so.6.0 00:03:43.641 CC lib/env_dpdk/init.o 00:03:43.899 SYMLINK libspdk_rdma_utils.so 00:03:43.899 CC lib/env_dpdk/threads.o 00:03:43.899 SYMLINK libspdk_json.so 00:03:43.899 CC lib/env_dpdk/pci_ioat.o 00:03:43.899 CC lib/env_dpdk/pci_virtio.o 00:03:43.899 CC lib/env_dpdk/pci_vmd.o 00:03:43.899 CC lib/env_dpdk/pci_idxd.o 00:03:43.899 CC lib/env_dpdk/pci_event.o 00:03:43.899 CC lib/env_dpdk/sigbus_handler.o 00:03:44.158 CC lib/env_dpdk/pci_dpdk.o 00:03:44.158 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:44.158 CC lib/rdma_provider/common.o 00:03:44.158 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:44.158 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:44.158 LIB libspdk_idxd.a 00:03:44.158 LIB libspdk_vmd.a 00:03:44.158 SO libspdk_idxd.so.12.1 00:03:44.159 SO libspdk_vmd.so.6.0 00:03:44.159 CC lib/jsonrpc/jsonrpc_server.o 00:03:44.159 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:44.159 CC lib/jsonrpc/jsonrpc_client.o 00:03:44.159 SYMLINK libspdk_idxd.so 00:03:44.159 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.418 SYMLINK libspdk_vmd.so 00:03:44.418 LIB libspdk_rdma_provider.a 00:03:44.418 SO libspdk_rdma_provider.so.7.0 00:03:44.418 SYMLINK libspdk_rdma_provider.so 00:03:44.681 LIB libspdk_jsonrpc.a 00:03:44.681 SO libspdk_jsonrpc.so.6.0 00:03:44.681 SYMLINK libspdk_jsonrpc.so 00:03:45.250 CC lib/rpc/rpc.o 00:03:45.250 LIB libspdk_env_dpdk.a 
00:03:45.250 SO libspdk_env_dpdk.so.15.1 00:03:45.250 LIB libspdk_rpc.a 00:03:45.250 SO libspdk_rpc.so.6.0 00:03:45.510 SYMLINK libspdk_env_dpdk.so 00:03:45.510 SYMLINK libspdk_rpc.so 00:03:45.770 CC lib/trace/trace.o 00:03:45.770 CC lib/trace/trace_flags.o 00:03:45.770 CC lib/trace/trace_rpc.o 00:03:45.770 CC lib/notify/notify.o 00:03:45.770 CC lib/keyring/keyring.o 00:03:45.770 CC lib/notify/notify_rpc.o 00:03:45.770 CC lib/keyring/keyring_rpc.o 00:03:46.031 LIB libspdk_notify.a 00:03:46.031 SO libspdk_notify.so.6.0 00:03:46.031 LIB libspdk_keyring.a 00:03:46.031 LIB libspdk_trace.a 00:03:46.031 SYMLINK libspdk_notify.so 00:03:46.031 SO libspdk_trace.so.11.0 00:03:46.031 SO libspdk_keyring.so.2.0 00:03:46.031 SYMLINK libspdk_trace.so 00:03:46.031 SYMLINK libspdk_keyring.so 00:03:46.600 CC lib/sock/sock.o 00:03:46.600 CC lib/sock/sock_rpc.o 00:03:46.600 CC lib/thread/iobuf.o 00:03:46.600 CC lib/thread/thread.o 00:03:46.860 LIB libspdk_sock.a 00:03:46.860 SO libspdk_sock.so.10.0 00:03:47.120 SYMLINK libspdk_sock.so 00:03:47.380 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.380 CC lib/nvme/nvme_ctrlr.o 00:03:47.380 CC lib/nvme/nvme_fabric.o 00:03:47.380 CC lib/nvme/nvme_ns_cmd.o 00:03:47.380 CC lib/nvme/nvme_ns.o 00:03:47.380 CC lib/nvme/nvme_pcie.o 00:03:47.380 CC lib/nvme/nvme_pcie_common.o 00:03:47.380 CC lib/nvme/nvme_qpair.o 00:03:47.380 CC lib/nvme/nvme.o 00:03:48.318 CC lib/nvme/nvme_quirks.o 00:03:48.318 CC lib/nvme/nvme_transport.o 00:03:48.318 LIB libspdk_thread.a 00:03:48.318 CC lib/nvme/nvme_discovery.o 00:03:48.318 SO libspdk_thread.so.11.0 00:03:48.318 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:48.318 SYMLINK libspdk_thread.so 00:03:48.318 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:48.318 CC lib/nvme/nvme_tcp.o 00:03:48.577 CC lib/nvme/nvme_opal.o 00:03:48.577 CC lib/accel/accel.o 00:03:48.577 CC lib/nvme/nvme_io_msg.o 00:03:48.577 CC lib/nvme/nvme_poll_group.o 00:03:48.835 CC lib/nvme/nvme_zns.o 00:03:48.835 CC lib/nvme/nvme_stubs.o 00:03:48.835 CC 
lib/nvme/nvme_auth.o 00:03:48.836 CC lib/accel/accel_rpc.o 00:03:49.094 CC lib/accel/accel_sw.o 00:03:49.353 CC lib/nvme/nvme_cuse.o 00:03:49.353 CC lib/nvme/nvme_rdma.o 00:03:49.353 CC lib/blob/blobstore.o 00:03:49.353 CC lib/blob/request.o 00:03:49.353 CC lib/blob/zeroes.o 00:03:49.353 CC lib/blob/blob_bs_dev.o 00:03:49.613 CC lib/init/json_config.o 00:03:49.613 CC lib/init/subsystem.o 00:03:49.872 CC lib/virtio/virtio.o 00:03:49.872 LIB libspdk_accel.a 00:03:49.872 SO libspdk_accel.so.16.0 00:03:49.873 CC lib/init/subsystem_rpc.o 00:03:49.873 CC lib/init/rpc.o 00:03:49.873 SYMLINK libspdk_accel.so 00:03:49.873 CC lib/virtio/virtio_vhost_user.o 00:03:50.132 CC lib/virtio/virtio_vfio_user.o 00:03:50.132 CC lib/fsdev/fsdev.o 00:03:50.132 LIB libspdk_init.a 00:03:50.132 CC lib/virtio/virtio_pci.o 00:03:50.132 CC lib/fsdev/fsdev_io.o 00:03:50.132 SO libspdk_init.so.6.0 00:03:50.132 CC lib/bdev/bdev.o 00:03:50.132 SYMLINK libspdk_init.so 00:03:50.132 CC lib/fsdev/fsdev_rpc.o 00:03:50.392 CC lib/bdev/bdev_rpc.o 00:03:50.392 CC lib/bdev/bdev_zone.o 00:03:50.392 CC lib/bdev/part.o 00:03:50.392 CC lib/event/app.o 00:03:50.392 LIB libspdk_virtio.a 00:03:50.392 SO libspdk_virtio.so.7.0 00:03:50.392 CC lib/event/reactor.o 00:03:50.392 CC lib/event/log_rpc.o 00:03:50.652 SYMLINK libspdk_virtio.so 00:03:50.652 CC lib/bdev/scsi_nvme.o 00:03:50.652 CC lib/event/app_rpc.o 00:03:50.652 CC lib/event/scheduler_static.o 00:03:50.912 LIB libspdk_fsdev.a 00:03:50.912 LIB libspdk_nvme.a 00:03:50.912 SO libspdk_fsdev.so.2.0 00:03:50.912 SYMLINK libspdk_fsdev.so 00:03:50.912 LIB libspdk_event.a 00:03:51.172 SO libspdk_event.so.14.0 00:03:51.172 SO libspdk_nvme.so.15.0 00:03:51.172 SYMLINK libspdk_event.so 00:03:51.172 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:51.432 SYMLINK libspdk_nvme.so 00:03:52.000 LIB libspdk_fuse_dispatcher.a 00:03:52.000 SO libspdk_fuse_dispatcher.so.1.0 00:03:52.000 SYMLINK libspdk_fuse_dispatcher.so 00:03:52.939 LIB libspdk_blob.a 00:03:53.200 SO 
libspdk_blob.so.12.0 00:03:53.200 SYMLINK libspdk_blob.so 00:03:53.200 LIB libspdk_bdev.a 00:03:53.458 SO libspdk_bdev.so.17.0 00:03:53.458 SYMLINK libspdk_bdev.so 00:03:53.458 CC lib/blobfs/blobfs.o 00:03:53.458 CC lib/blobfs/tree.o 00:03:53.458 CC lib/lvol/lvol.o 00:03:53.718 CC lib/scsi/dev.o 00:03:53.718 CC lib/scsi/lun.o 00:03:53.718 CC lib/scsi/port.o 00:03:53.718 CC lib/nbd/nbd.o 00:03:53.718 CC lib/nvmf/ctrlr.o 00:03:53.718 CC lib/ublk/ublk.o 00:03:53.718 CC lib/ftl/ftl_core.o 00:03:53.718 CC lib/nvmf/ctrlr_discovery.o 00:03:53.718 CC lib/nvmf/ctrlr_bdev.o 00:03:53.977 CC lib/ftl/ftl_init.o 00:03:53.977 CC lib/scsi/scsi.o 00:03:53.977 CC lib/scsi/scsi_bdev.o 00:03:53.977 CC lib/nbd/nbd_rpc.o 00:03:53.977 CC lib/ftl/ftl_layout.o 00:03:53.977 CC lib/ftl/ftl_debug.o 00:03:54.236 CC lib/nvmf/subsystem.o 00:03:54.236 LIB libspdk_nbd.a 00:03:54.236 SO libspdk_nbd.so.7.0 00:03:54.236 SYMLINK libspdk_nbd.so 00:03:54.236 CC lib/scsi/scsi_pr.o 00:03:54.236 CC lib/scsi/scsi_rpc.o 00:03:54.236 CC lib/ublk/ublk_rpc.o 00:03:54.495 CC lib/ftl/ftl_io.o 00:03:54.495 LIB libspdk_blobfs.a 00:03:54.495 SO libspdk_blobfs.so.11.0 00:03:54.495 CC lib/ftl/ftl_sb.o 00:03:54.495 LIB libspdk_ublk.a 00:03:54.495 SYMLINK libspdk_blobfs.so 00:03:54.495 CC lib/scsi/task.o 00:03:54.495 SO libspdk_ublk.so.3.0 00:03:54.495 CC lib/nvmf/nvmf.o 00:03:54.495 CC lib/ftl/ftl_l2p.o 00:03:54.755 SYMLINK libspdk_ublk.so 00:03:54.755 LIB libspdk_lvol.a 00:03:54.755 CC lib/nvmf/nvmf_rpc.o 00:03:54.755 SO libspdk_lvol.so.11.0 00:03:54.755 CC lib/nvmf/transport.o 00:03:54.755 CC lib/nvmf/tcp.o 00:03:54.755 CC lib/nvmf/stubs.o 00:03:54.755 SYMLINK libspdk_lvol.so 00:03:54.755 CC lib/nvmf/mdns_server.o 00:03:54.755 LIB libspdk_scsi.a 00:03:54.755 CC lib/ftl/ftl_l2p_flat.o 00:03:54.755 SO libspdk_scsi.so.9.0 00:03:55.014 SYMLINK libspdk_scsi.so 00:03:55.014 CC lib/nvmf/rdma.o 00:03:55.014 CC lib/ftl/ftl_nv_cache.o 00:03:55.274 CC lib/nvmf/auth.o 00:03:55.274 CC lib/ftl/ftl_band.o 00:03:55.533 CC 
lib/ftl/ftl_band_ops.o 00:03:55.533 CC lib/iscsi/conn.o 00:03:55.533 CC lib/ftl/ftl_writer.o 00:03:55.792 CC lib/ftl/ftl_rq.o 00:03:55.792 CC lib/vhost/vhost.o 00:03:55.792 CC lib/iscsi/init_grp.o 00:03:55.792 CC lib/ftl/ftl_reloc.o 00:03:55.792 CC lib/iscsi/iscsi.o 00:03:56.052 CC lib/iscsi/param.o 00:03:56.052 CC lib/iscsi/portal_grp.o 00:03:56.052 CC lib/vhost/vhost_rpc.o 00:03:56.052 CC lib/ftl/ftl_l2p_cache.o 00:03:56.311 CC lib/ftl/ftl_p2l.o 00:03:56.311 CC lib/ftl/ftl_p2l_log.o 00:03:56.311 CC lib/iscsi/tgt_node.o 00:03:56.311 CC lib/ftl/mngt/ftl_mngt.o 00:03:56.570 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:56.570 CC lib/vhost/vhost_scsi.o 00:03:56.570 CC lib/vhost/vhost_blk.o 00:03:56.570 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:56.570 CC lib/vhost/rte_vhost_user.o 00:03:56.829 CC lib/iscsi/iscsi_subsystem.o 00:03:56.829 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:56.829 CC lib/iscsi/iscsi_rpc.o 00:03:56.829 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:56.829 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:57.088 CC lib/iscsi/task.o 00:03:57.088 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:57.088 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:57.088 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:57.088 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:57.088 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:57.348 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.348 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:57.348 CC lib/ftl/utils/ftl_conf.o 00:03:57.348 CC lib/ftl/utils/ftl_md.o 00:03:57.607 LIB libspdk_nvmf.a 00:03:57.607 LIB libspdk_iscsi.a 00:03:57.607 CC lib/ftl/utils/ftl_mempool.o 00:03:57.607 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.607 CC lib/ftl/utils/ftl_property.o 00:03:57.607 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.607 SO libspdk_iscsi.so.8.0 00:03:57.607 SO libspdk_nvmf.so.20.0 00:03:57.607 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.607 LIB libspdk_vhost.a 00:03:57.607 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.607 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.867 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:03:57.867 SYMLINK libspdk_iscsi.so 00:03:57.867 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.867 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:57.867 SO libspdk_vhost.so.8.0 00:03:57.867 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:57.867 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:57.867 SYMLINK libspdk_nvmf.so 00:03:57.867 SYMLINK libspdk_vhost.so 00:03:57.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:57.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:57.867 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:57.867 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:57.867 CC lib/ftl/base/ftl_base_dev.o 00:03:57.867 CC lib/ftl/base/ftl_base_bdev.o 00:03:57.867 CC lib/ftl/ftl_trace.o 00:03:58.127 LIB libspdk_ftl.a 00:03:58.388 SO libspdk_ftl.so.9.0 00:03:58.648 SYMLINK libspdk_ftl.so 00:03:58.908 CC module/env_dpdk/env_dpdk_rpc.o 00:03:59.168 CC module/keyring/linux/keyring.o 00:03:59.168 CC module/sock/posix/posix.o 00:03:59.168 CC module/blob/bdev/blob_bdev.o 00:03:59.168 CC module/scheduler/gscheduler/gscheduler.o 00:03:59.168 CC module/accel/error/accel_error.o 00:03:59.168 CC module/keyring/file/keyring.o 00:03:59.168 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:59.168 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:59.168 CC module/fsdev/aio/fsdev_aio.o 00:03:59.168 LIB libspdk_env_dpdk_rpc.a 00:03:59.168 SO libspdk_env_dpdk_rpc.so.6.0 00:03:59.168 SYMLINK libspdk_env_dpdk_rpc.so 00:03:59.168 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:59.168 CC module/keyring/linux/keyring_rpc.o 00:03:59.168 CC module/keyring/file/keyring_rpc.o 00:03:59.168 LIB libspdk_scheduler_gscheduler.a 00:03:59.168 LIB libspdk_scheduler_dpdk_governor.a 00:03:59.168 SO libspdk_scheduler_gscheduler.so.4.0 00:03:59.168 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:59.168 LIB libspdk_scheduler_dynamic.a 00:03:59.428 CC module/accel/error/accel_error_rpc.o 00:03:59.428 SYMLINK libspdk_scheduler_gscheduler.so 00:03:59.428 SO libspdk_scheduler_dynamic.so.4.0 00:03:59.428 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:03:59.428 LIB libspdk_keyring_linux.a 00:03:59.428 SYMLINK libspdk_scheduler_dynamic.so 00:03:59.428 LIB libspdk_keyring_file.a 00:03:59.428 CC module/fsdev/aio/linux_aio_mgr.o 00:03:59.428 LIB libspdk_blob_bdev.a 00:03:59.428 SO libspdk_keyring_linux.so.1.0 00:03:59.428 SO libspdk_keyring_file.so.2.0 00:03:59.428 SO libspdk_blob_bdev.so.12.0 00:03:59.428 LIB libspdk_accel_error.a 00:03:59.428 SYMLINK libspdk_keyring_linux.so 00:03:59.428 SYMLINK libspdk_keyring_file.so 00:03:59.428 SYMLINK libspdk_blob_bdev.so 00:03:59.428 SO libspdk_accel_error.so.2.0 00:03:59.428 CC module/accel/dsa/accel_dsa.o 00:03:59.428 CC module/accel/ioat/accel_ioat.o 00:03:59.428 CC module/accel/dsa/accel_dsa_rpc.o 00:03:59.428 CC module/accel/iaa/accel_iaa.o 00:03:59.428 SYMLINK libspdk_accel_error.so 00:03:59.428 CC module/accel/iaa/accel_iaa_rpc.o 00:03:59.428 CC module/accel/ioat/accel_ioat_rpc.o 00:03:59.687 CC module/blobfs/bdev/blobfs_bdev.o 00:03:59.687 CC module/bdev/delay/vbdev_delay.o 00:03:59.687 LIB libspdk_accel_ioat.a 00:03:59.687 LIB libspdk_accel_iaa.a 00:03:59.687 SO libspdk_accel_ioat.so.6.0 00:03:59.687 SO libspdk_accel_iaa.so.3.0 00:03:59.687 CC module/bdev/error/vbdev_error.o 00:03:59.687 LIB libspdk_accel_dsa.a 00:03:59.687 SYMLINK libspdk_accel_ioat.so 00:03:59.687 CC module/bdev/error/vbdev_error_rpc.o 00:03:59.687 CC module/bdev/gpt/gpt.o 00:03:59.946 SYMLINK libspdk_accel_iaa.so 00:03:59.946 SO libspdk_accel_dsa.so.5.0 00:03:59.946 CC module/bdev/lvol/vbdev_lvol.o 00:03:59.946 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.946 LIB libspdk_fsdev_aio.a 00:03:59.946 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:59.946 SO libspdk_fsdev_aio.so.1.0 00:03:59.946 LIB libspdk_sock_posix.a 00:03:59.946 SYMLINK libspdk_accel_dsa.so 00:03:59.946 SO libspdk_sock_posix.so.6.0 00:03:59.946 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:59.946 SYMLINK libspdk_fsdev_aio.so 00:03:59.946 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.946 LIB 
libspdk_blobfs_bdev.a 00:03:59.946 SYMLINK libspdk_sock_posix.so 00:03:59.946 SO libspdk_blobfs_bdev.so.6.0 00:03:59.946 LIB libspdk_bdev_error.a 00:04:00.205 SO libspdk_bdev_error.so.6.0 00:04:00.205 SYMLINK libspdk_blobfs_bdev.so 00:04:00.205 LIB libspdk_bdev_delay.a 00:04:00.205 CC module/bdev/malloc/bdev_malloc.o 00:04:00.205 CC module/bdev/null/bdev_null.o 00:04:00.205 SYMLINK libspdk_bdev_error.so 00:04:00.205 SO libspdk_bdev_delay.so.6.0 00:04:00.205 CC module/bdev/nvme/bdev_nvme.o 00:04:00.205 CC module/bdev/passthru/vbdev_passthru.o 00:04:00.205 SYMLINK libspdk_bdev_delay.so 00:04:00.205 CC module/bdev/raid/bdev_raid.o 00:04:00.205 LIB libspdk_bdev_gpt.a 00:04:00.205 SO libspdk_bdev_gpt.so.6.0 00:04:00.205 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:00.205 CC module/bdev/split/vbdev_split.o 00:04:00.465 SYMLINK libspdk_bdev_gpt.so 00:04:00.465 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:00.465 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.465 LIB libspdk_bdev_lvol.a 00:04:00.465 SO libspdk_bdev_lvol.so.6.0 00:04:00.465 CC module/bdev/null/bdev_null_rpc.o 00:04:00.465 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:00.465 SYMLINK libspdk_bdev_lvol.so 00:04:00.465 CC module/bdev/raid/bdev_raid_rpc.o 00:04:00.465 CC module/bdev/split/vbdev_split_rpc.o 00:04:00.465 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:00.465 LIB libspdk_bdev_passthru.a 00:04:00.465 SO libspdk_bdev_passthru.so.6.0 00:04:00.724 LIB libspdk_bdev_null.a 00:04:00.724 SYMLINK libspdk_bdev_passthru.so 00:04:00.724 SO libspdk_bdev_null.so.6.0 00:04:00.724 LIB libspdk_bdev_malloc.a 00:04:00.724 LIB libspdk_bdev_split.a 00:04:00.724 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.724 SYMLINK libspdk_bdev_null.so 00:04:00.724 SO libspdk_bdev_malloc.so.6.0 00:04:00.724 SO libspdk_bdev_split.so.6.0 00:04:00.724 LIB libspdk_bdev_zone_block.a 00:04:00.724 CC module/bdev/nvme/nvme_rpc.o 00:04:00.724 CC module/bdev/aio/bdev_aio.o 00:04:00.724 SO libspdk_bdev_zone_block.so.6.0 
00:04:00.724 SYMLINK libspdk_bdev_malloc.so 00:04:00.724 SYMLINK libspdk_bdev_split.so 00:04:00.724 CC module/bdev/nvme/bdev_mdns_client.o 00:04:00.724 CC module/bdev/ftl/bdev_ftl.o 00:04:00.724 SYMLINK libspdk_bdev_zone_block.so 00:04:00.724 CC module/bdev/raid/raid0.o 00:04:00.983 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.983 CC module/bdev/raid/raid1.o 00:04:00.983 CC module/bdev/raid/concat.o 00:04:00.983 CC module/bdev/nvme/vbdev_opal.o 00:04:00.983 CC module/bdev/aio/bdev_aio_rpc.o 00:04:00.983 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:01.242 CC module/bdev/raid/raid5f.o 00:04:01.242 LIB libspdk_bdev_aio.a 00:04:01.242 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:01.242 SO libspdk_bdev_aio.so.6.0 00:04:01.242 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:01.242 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:01.242 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:01.242 SYMLINK libspdk_bdev_aio.so 00:04:01.242 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:01.242 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:01.242 LIB libspdk_bdev_ftl.a 00:04:01.242 SO libspdk_bdev_ftl.so.6.0 00:04:01.501 SYMLINK libspdk_bdev_ftl.so 00:04:01.501 LIB libspdk_bdev_iscsi.a 00:04:01.501 SO libspdk_bdev_iscsi.so.6.0 00:04:01.501 SYMLINK libspdk_bdev_iscsi.so 00:04:01.762 LIB libspdk_bdev_raid.a 00:04:01.762 SO libspdk_bdev_raid.so.6.0 00:04:01.762 SYMLINK libspdk_bdev_raid.so 00:04:01.762 LIB libspdk_bdev_virtio.a 00:04:02.025 SO libspdk_bdev_virtio.so.6.0 00:04:02.025 SYMLINK libspdk_bdev_virtio.so 00:04:02.963 LIB libspdk_bdev_nvme.a 00:04:03.223 SO libspdk_bdev_nvme.so.7.1 00:04:03.223 SYMLINK libspdk_bdev_nvme.so 00:04:03.793 CC module/event/subsystems/sock/sock.o 00:04:03.794 CC module/event/subsystems/keyring/keyring.o 00:04:03.794 CC module/event/subsystems/fsdev/fsdev.o 00:04:03.794 CC module/event/subsystems/iobuf/iobuf.o 00:04:03.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:03.794 CC module/event/subsystems/scheduler/scheduler.o 00:04:03.794 CC 
module/event/subsystems/vmd/vmd.o 00:04:03.794 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:03.794 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:03.794 LIB libspdk_event_fsdev.a 00:04:03.794 LIB libspdk_event_sock.a 00:04:03.794 LIB libspdk_event_keyring.a 00:04:03.794 LIB libspdk_event_vhost_blk.a 00:04:03.794 LIB libspdk_event_scheduler.a 00:04:03.794 LIB libspdk_event_iobuf.a 00:04:04.053 SO libspdk_event_fsdev.so.1.0 00:04:04.053 LIB libspdk_event_vmd.a 00:04:04.053 SO libspdk_event_sock.so.5.0 00:04:04.053 SO libspdk_event_keyring.so.1.0 00:04:04.053 SO libspdk_event_vhost_blk.so.3.0 00:04:04.053 SO libspdk_event_scheduler.so.4.0 00:04:04.053 SO libspdk_event_iobuf.so.3.0 00:04:04.053 SO libspdk_event_vmd.so.6.0 00:04:04.053 SYMLINK libspdk_event_sock.so 00:04:04.053 SYMLINK libspdk_event_fsdev.so 00:04:04.053 SYMLINK libspdk_event_keyring.so 00:04:04.053 SYMLINK libspdk_event_vhost_blk.so 00:04:04.053 SYMLINK libspdk_event_scheduler.so 00:04:04.053 SYMLINK libspdk_event_iobuf.so 00:04:04.053 SYMLINK libspdk_event_vmd.so 00:04:04.312 CC module/event/subsystems/accel/accel.o 00:04:04.573 LIB libspdk_event_accel.a 00:04:04.573 SO libspdk_event_accel.so.6.0 00:04:04.573 SYMLINK libspdk_event_accel.so 00:04:05.142 CC module/event/subsystems/bdev/bdev.o 00:04:05.142 LIB libspdk_event_bdev.a 00:04:05.142 SO libspdk_event_bdev.so.6.0 00:04:05.400 SYMLINK libspdk_event_bdev.so 00:04:05.659 CC module/event/subsystems/ublk/ublk.o 00:04:05.659 CC module/event/subsystems/nbd/nbd.o 00:04:05.659 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:05.659 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:05.659 CC module/event/subsystems/scsi/scsi.o 00:04:05.659 LIB libspdk_event_ublk.a 00:04:05.917 LIB libspdk_event_nbd.a 00:04:05.917 LIB libspdk_event_scsi.a 00:04:05.917 SO libspdk_event_ublk.so.3.0 00:04:05.917 SO libspdk_event_nbd.so.6.0 00:04:05.917 SO libspdk_event_scsi.so.6.0 00:04:05.917 SYMLINK libspdk_event_ublk.so 00:04:05.917 SYMLINK libspdk_event_scsi.so 
00:04:05.917 SYMLINK libspdk_event_nbd.so 00:04:05.917 LIB libspdk_event_nvmf.a 00:04:05.917 SO libspdk_event_nvmf.so.6.0 00:04:05.917 SYMLINK libspdk_event_nvmf.so 00:04:06.175 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.175 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.433 LIB libspdk_event_vhost_scsi.a 00:04:06.433 LIB libspdk_event_iscsi.a 00:04:06.433 SO libspdk_event_vhost_scsi.so.3.0 00:04:06.433 SO libspdk_event_iscsi.so.6.0 00:04:06.433 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.433 SYMLINK libspdk_event_iscsi.so 00:04:06.691 SO libspdk.so.6.0 00:04:06.691 SYMLINK libspdk.so 00:04:06.949 CXX app/trace/trace.o 00:04:06.949 CC app/trace_record/trace_record.o 00:04:06.949 CC app/nvmf_tgt/nvmf_main.o 00:04:06.949 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.949 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:07.207 CC examples/util/zipf/zipf.o 00:04:07.207 CC test/thread/poller_perf/poller_perf.o 00:04:07.207 CC examples/ioat/perf/perf.o 00:04:07.207 CC test/dma/test_dma/test_dma.o 00:04:07.207 CC test/app/bdev_svc/bdev_svc.o 00:04:07.207 LINK poller_perf 00:04:07.207 LINK zipf 00:04:07.207 LINK nvmf_tgt 00:04:07.207 LINK interrupt_tgt 00:04:07.465 LINK spdk_trace_record 00:04:07.465 LINK ioat_perf 00:04:07.465 LINK iscsi_tgt 00:04:07.465 LINK bdev_svc 00:04:07.465 LINK spdk_trace 00:04:07.465 CC examples/ioat/verify/verify.o 00:04:07.724 CC app/spdk_lspci/spdk_lspci.o 00:04:07.724 TEST_HEADER include/spdk/accel.h 00:04:07.724 TEST_HEADER include/spdk/accel_module.h 00:04:07.724 TEST_HEADER include/spdk/assert.h 00:04:07.724 TEST_HEADER include/spdk/barrier.h 00:04:07.724 TEST_HEADER include/spdk/base64.h 00:04:07.724 TEST_HEADER include/spdk/bdev.h 00:04:07.724 TEST_HEADER include/spdk/bdev_module.h 00:04:07.724 TEST_HEADER include/spdk/bdev_zone.h 00:04:07.724 TEST_HEADER include/spdk/bit_array.h 00:04:07.724 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.724 TEST_HEADER include/spdk/bit_pool.h 00:04:07.724 TEST_HEADER 
include/spdk/blob_bdev.h 00:04:07.724 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:07.724 TEST_HEADER include/spdk/blobfs.h 00:04:07.724 TEST_HEADER include/spdk/blob.h 00:04:07.724 TEST_HEADER include/spdk/conf.h 00:04:07.724 TEST_HEADER include/spdk/config.h 00:04:07.724 TEST_HEADER include/spdk/cpuset.h 00:04:07.724 TEST_HEADER include/spdk/crc16.h 00:04:07.724 TEST_HEADER include/spdk/crc32.h 00:04:07.724 TEST_HEADER include/spdk/crc64.h 00:04:07.724 TEST_HEADER include/spdk/dif.h 00:04:07.724 TEST_HEADER include/spdk/dma.h 00:04:07.724 TEST_HEADER include/spdk/endian.h 00:04:07.724 TEST_HEADER include/spdk/env_dpdk.h 00:04:07.724 TEST_HEADER include/spdk/env.h 00:04:07.724 TEST_HEADER include/spdk/event.h 00:04:07.724 TEST_HEADER include/spdk/fd_group.h 00:04:07.724 TEST_HEADER include/spdk/fd.h 00:04:07.724 TEST_HEADER include/spdk/file.h 00:04:07.724 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:07.724 TEST_HEADER include/spdk/fsdev.h 00:04:07.724 TEST_HEADER include/spdk/fsdev_module.h 00:04:07.724 TEST_HEADER include/spdk/ftl.h 00:04:07.724 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:07.724 TEST_HEADER include/spdk/gpt_spec.h 00:04:07.724 TEST_HEADER include/spdk/hexlify.h 00:04:07.724 TEST_HEADER include/spdk/histogram_data.h 00:04:07.724 TEST_HEADER include/spdk/idxd.h 00:04:07.724 TEST_HEADER include/spdk/idxd_spec.h 00:04:07.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:07.724 TEST_HEADER include/spdk/init.h 00:04:07.724 TEST_HEADER include/spdk/ioat.h 00:04:07.724 TEST_HEADER include/spdk/ioat_spec.h 00:04:07.724 TEST_HEADER include/spdk/iscsi_spec.h 00:04:07.724 CC app/spdk_tgt/spdk_tgt.o 00:04:07.724 TEST_HEADER include/spdk/json.h 00:04:07.724 TEST_HEADER include/spdk/jsonrpc.h 00:04:07.724 TEST_HEADER include/spdk/keyring.h 00:04:07.724 TEST_HEADER include/spdk/keyring_module.h 00:04:07.724 TEST_HEADER include/spdk/likely.h 00:04:07.724 TEST_HEADER include/spdk/log.h 00:04:07.724 TEST_HEADER include/spdk/lvol.h 00:04:07.724 
TEST_HEADER include/spdk/md5.h 00:04:07.724 TEST_HEADER include/spdk/memory.h 00:04:07.724 TEST_HEADER include/spdk/mmio.h 00:04:07.724 CC examples/thread/thread/thread_ex.o 00:04:07.724 TEST_HEADER include/spdk/nbd.h 00:04:07.724 TEST_HEADER include/spdk/net.h 00:04:07.724 TEST_HEADER include/spdk/notify.h 00:04:07.724 TEST_HEADER include/spdk/nvme.h 00:04:07.724 TEST_HEADER include/spdk/nvme_intel.h 00:04:07.724 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:07.724 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:07.724 TEST_HEADER include/spdk/nvme_spec.h 00:04:07.724 TEST_HEADER include/spdk/nvme_zns.h 00:04:07.724 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:07.724 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:07.724 TEST_HEADER include/spdk/nvmf.h 00:04:07.724 TEST_HEADER include/spdk/nvmf_spec.h 00:04:07.724 TEST_HEADER include/spdk/nvmf_transport.h 00:04:07.724 TEST_HEADER include/spdk/opal.h 00:04:07.724 TEST_HEADER include/spdk/opal_spec.h 00:04:07.724 LINK test_dma 00:04:07.724 TEST_HEADER include/spdk/pci_ids.h 00:04:07.724 TEST_HEADER include/spdk/pipe.h 00:04:07.724 TEST_HEADER include/spdk/queue.h 00:04:07.724 TEST_HEADER include/spdk/reduce.h 00:04:07.724 TEST_HEADER include/spdk/rpc.h 00:04:07.724 TEST_HEADER include/spdk/scheduler.h 00:04:07.724 TEST_HEADER include/spdk/scsi.h 00:04:07.724 LINK spdk_lspci 00:04:07.724 TEST_HEADER include/spdk/scsi_spec.h 00:04:07.724 TEST_HEADER include/spdk/sock.h 00:04:07.724 TEST_HEADER include/spdk/stdinc.h 00:04:07.724 TEST_HEADER include/spdk/string.h 00:04:07.724 TEST_HEADER include/spdk/thread.h 00:04:07.724 TEST_HEADER include/spdk/trace.h 00:04:07.724 TEST_HEADER include/spdk/trace_parser.h 00:04:07.724 TEST_HEADER include/spdk/tree.h 00:04:07.724 TEST_HEADER include/spdk/ublk.h 00:04:07.724 TEST_HEADER include/spdk/util.h 00:04:07.724 TEST_HEADER include/spdk/uuid.h 00:04:07.724 TEST_HEADER include/spdk/version.h 00:04:07.724 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:07.724 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:07.724 TEST_HEADER include/spdk/vhost.h 00:04:07.724 TEST_HEADER include/spdk/vmd.h 00:04:07.724 TEST_HEADER include/spdk/xor.h 00:04:07.724 TEST_HEADER include/spdk/zipf.h 00:04:07.724 CXX test/cpp_headers/accel.o 00:04:07.724 LINK verify 00:04:07.724 CC examples/sock/hello_world/hello_sock.o 00:04:07.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:07.982 LINK spdk_tgt 00:04:07.982 CXX test/cpp_headers/accel_module.o 00:04:07.983 LINK thread 00:04:07.983 CC app/spdk_nvme_perf/perf.o 00:04:07.983 CC app/spdk_nvme_identify/identify.o 00:04:07.983 LINK nvme_fuzz 00:04:07.983 LINK hello_sock 00:04:08.241 CC examples/vmd/lsvmd/lsvmd.o 00:04:08.241 CXX test/cpp_headers/assert.o 00:04:08.241 CXX test/cpp_headers/barrier.o 00:04:08.241 LINK lsvmd 00:04:08.241 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.241 CXX test/cpp_headers/base64.o 00:04:08.241 LINK vhost_fuzz 00:04:08.241 CC examples/vmd/led/led.o 00:04:08.499 CC test/event/event_perf/event_perf.o 00:04:08.499 CXX test/cpp_headers/bdev.o 00:04:08.499 CXX test/cpp_headers/bdev_module.o 00:04:08.499 LINK led 00:04:08.499 CC test/env/mem_callbacks/mem_callbacks.o 00:04:08.499 LINK spdk_nvme_discover 00:04:08.499 CC test/event/reactor/reactor.o 00:04:08.499 LINK event_perf 00:04:08.758 CXX test/cpp_headers/bdev_zone.o 00:04:08.758 LINK reactor 00:04:08.758 CXX test/cpp_headers/bit_array.o 00:04:08.758 CC app/spdk_top/spdk_top.o 00:04:08.758 CC examples/idxd/perf/perf.o 00:04:08.758 CXX test/cpp_headers/bit_pool.o 00:04:09.016 CC test/nvme/aer/aer.o 00:04:09.016 CC test/event/reactor_perf/reactor_perf.o 00:04:09.016 CC test/nvme/reset/reset.o 00:04:09.016 LINK spdk_nvme_perf 00:04:09.016 LINK spdk_nvme_identify 00:04:09.016 CXX test/cpp_headers/blob_bdev.o 00:04:09.016 LINK mem_callbacks 00:04:09.016 LINK reactor_perf 00:04:09.276 LINK idxd_perf 00:04:09.276 LINK aer 00:04:09.276 CXX test/cpp_headers/blobfs_bdev.o 00:04:09.276 LINK reset 00:04:09.276 CC test/nvme/sgl/sgl.o 
00:04:09.276 CC test/env/vtophys/vtophys.o 00:04:09.276 CC test/nvme/e2edp/nvme_dp.o 00:04:09.276 CC test/event/app_repeat/app_repeat.o 00:04:09.567 CXX test/cpp_headers/blobfs.o 00:04:09.567 CXX test/cpp_headers/blob.o 00:04:09.567 LINK vtophys 00:04:09.567 LINK app_repeat 00:04:09.567 CC test/event/scheduler/scheduler.o 00:04:09.567 LINK sgl 00:04:09.567 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:09.567 LINK nvme_dp 00:04:09.567 CXX test/cpp_headers/conf.o 00:04:09.839 LINK iscsi_fuzz 00:04:09.839 CC test/app/histogram_perf/histogram_perf.o 00:04:09.839 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:09.839 CXX test/cpp_headers/config.o 00:04:09.839 CC test/app/jsoncat/jsoncat.o 00:04:09.839 CXX test/cpp_headers/cpuset.o 00:04:09.839 LINK spdk_top 00:04:09.839 LINK scheduler 00:04:09.839 CC test/nvme/overhead/overhead.o 00:04:09.839 LINK histogram_perf 00:04:09.839 CC test/nvme/err_injection/err_injection.o 00:04:09.839 LINK hello_fsdev 00:04:09.839 LINK env_dpdk_post_init 00:04:09.839 LINK jsoncat 00:04:09.839 CXX test/cpp_headers/crc16.o 00:04:10.099 CC test/nvme/startup/startup.o 00:04:10.099 LINK err_injection 00:04:10.099 CC test/nvme/reserve/reserve.o 00:04:10.099 CC app/vhost/vhost.o 00:04:10.099 CXX test/cpp_headers/crc32.o 00:04:10.099 CC app/spdk_dd/spdk_dd.o 00:04:10.099 LINK overhead 00:04:10.099 CC test/env/memory/memory_ut.o 00:04:10.099 LINK startup 00:04:10.099 CC test/app/stub/stub.o 00:04:10.099 CC examples/accel/perf/accel_perf.o 00:04:10.359 CXX test/cpp_headers/crc64.o 00:04:10.359 LINK vhost 00:04:10.359 CC test/nvme/simple_copy/simple_copy.o 00:04:10.359 CXX test/cpp_headers/dif.o 00:04:10.359 LINK reserve 00:04:10.359 CXX test/cpp_headers/dma.o 00:04:10.359 LINK stub 00:04:10.359 CXX test/cpp_headers/endian.o 00:04:10.359 CXX test/cpp_headers/env_dpdk.o 00:04:10.359 CXX test/cpp_headers/env.o 00:04:10.359 CC test/rpc_client/rpc_client_test.o 00:04:10.359 LINK spdk_dd 00:04:10.618 LINK simple_copy 00:04:10.618 CC 
test/nvme/connect_stress/connect_stress.o 00:04:10.618 CC test/nvme/boot_partition/boot_partition.o 00:04:10.618 CXX test/cpp_headers/event.o 00:04:10.618 LINK rpc_client_test 00:04:10.618 CC test/nvme/compliance/nvme_compliance.o 00:04:10.618 CC test/nvme/fused_ordering/fused_ordering.o 00:04:10.618 CXX test/cpp_headers/fd_group.o 00:04:10.619 LINK boot_partition 00:04:10.619 LINK accel_perf 00:04:10.880 LINK connect_stress 00:04:10.880 CXX test/cpp_headers/fd.o 00:04:10.880 CC app/fio/nvme/fio_plugin.o 00:04:10.880 LINK fused_ordering 00:04:10.880 CC app/fio/bdev/fio_plugin.o 00:04:10.880 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:10.880 CXX test/cpp_headers/file.o 00:04:10.880 CC test/nvme/fdp/fdp.o 00:04:11.140 LINK nvme_compliance 00:04:11.140 CC test/accel/dif/dif.o 00:04:11.140 LINK doorbell_aers 00:04:11.140 CXX test/cpp_headers/fsdev.o 00:04:11.140 CC examples/blob/hello_world/hello_blob.o 00:04:11.140 CC examples/blob/cli/blobcli.o 00:04:11.398 CXX test/cpp_headers/fsdev_module.o 00:04:11.398 LINK memory_ut 00:04:11.398 LINK hello_blob 00:04:11.398 CC test/blobfs/mkfs/mkfs.o 00:04:11.398 LINK spdk_bdev 00:04:11.398 LINK spdk_nvme 00:04:11.398 LINK fdp 00:04:11.398 CXX test/cpp_headers/ftl.o 00:04:11.398 CC examples/nvme/hello_world/hello_world.o 00:04:11.657 LINK mkfs 00:04:11.657 CC test/nvme/cuse/cuse.o 00:04:11.657 CC test/env/pci/pci_ut.o 00:04:11.657 CXX test/cpp_headers/fuse_dispatcher.o 00:04:11.657 CC examples/nvme/reconnect/reconnect.o 00:04:11.657 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:11.657 LINK hello_world 00:04:11.657 CC examples/nvme/arbitration/arbitration.o 00:04:11.657 LINK blobcli 00:04:11.657 CXX test/cpp_headers/gpt_spec.o 00:04:11.915 LINK dif 00:04:11.915 CC examples/nvme/hotplug/hotplug.o 00:04:11.915 CXX test/cpp_headers/hexlify.o 00:04:11.915 LINK reconnect 00:04:11.915 CC test/lvol/esnap/esnap.o 00:04:11.915 LINK pci_ut 00:04:11.915 LINK arbitration 00:04:12.174 CXX test/cpp_headers/histogram_data.o 00:04:12.174 
CC examples/bdev/hello_world/hello_bdev.o 00:04:12.174 LINK hotplug 00:04:12.174 LINK nvme_manage 00:04:12.174 CC examples/bdev/bdevperf/bdevperf.o 00:04:12.174 CXX test/cpp_headers/idxd.o 00:04:12.174 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.174 CC examples/nvme/abort/abort.o 00:04:12.174 CXX test/cpp_headers/idxd_spec.o 00:04:12.433 LINK hello_bdev 00:04:12.433 CXX test/cpp_headers/init.o 00:04:12.433 CXX test/cpp_headers/ioat.o 00:04:12.433 CXX test/cpp_headers/ioat_spec.o 00:04:12.433 LINK cmb_copy 00:04:12.433 CXX test/cpp_headers/iscsi_spec.o 00:04:12.433 CXX test/cpp_headers/json.o 00:04:12.692 CC test/bdev/bdevio/bdevio.o 00:04:12.692 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.692 CXX test/cpp_headers/jsonrpc.o 00:04:12.692 CXX test/cpp_headers/keyring.o 00:04:12.692 CXX test/cpp_headers/keyring_module.o 00:04:12.692 CXX test/cpp_headers/likely.o 00:04:12.692 LINK abort 00:04:12.692 CXX test/cpp_headers/log.o 00:04:12.692 LINK pmr_persistence 00:04:12.692 CXX test/cpp_headers/lvol.o 00:04:12.692 CXX test/cpp_headers/md5.o 00:04:12.952 CXX test/cpp_headers/memory.o 00:04:12.952 CXX test/cpp_headers/mmio.o 00:04:12.952 CXX test/cpp_headers/nbd.o 00:04:12.952 CXX test/cpp_headers/net.o 00:04:12.952 CXX test/cpp_headers/notify.o 00:04:12.952 CXX test/cpp_headers/nvme.o 00:04:12.952 CXX test/cpp_headers/nvme_intel.o 00:04:12.952 LINK cuse 00:04:12.952 LINK bdevio 00:04:12.952 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.952 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:13.211 CXX test/cpp_headers/nvme_spec.o 00:04:13.211 CXX test/cpp_headers/nvme_zns.o 00:04:13.211 CXX test/cpp_headers/nvmf_cmd.o 00:04:13.211 LINK bdevperf 00:04:13.211 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:13.211 CXX test/cpp_headers/nvmf.o 00:04:13.211 CXX test/cpp_headers/nvmf_spec.o 00:04:13.211 CXX test/cpp_headers/nvmf_transport.o 00:04:13.211 CXX test/cpp_headers/opal.o 00:04:13.211 CXX test/cpp_headers/opal_spec.o 00:04:13.211 CXX test/cpp_headers/pci_ids.o 
00:04:13.471 CXX test/cpp_headers/pipe.o 00:04:13.471 CXX test/cpp_headers/queue.o 00:04:13.471 CXX test/cpp_headers/reduce.o 00:04:13.471 CXX test/cpp_headers/rpc.o 00:04:13.471 CXX test/cpp_headers/scheduler.o 00:04:13.471 CXX test/cpp_headers/scsi.o 00:04:13.471 CXX test/cpp_headers/scsi_spec.o 00:04:13.471 CXX test/cpp_headers/sock.o 00:04:13.471 CXX test/cpp_headers/stdinc.o 00:04:13.471 CXX test/cpp_headers/string.o 00:04:13.471 CXX test/cpp_headers/thread.o 00:04:13.471 CXX test/cpp_headers/trace.o 00:04:13.471 CXX test/cpp_headers/trace_parser.o 00:04:13.471 CC examples/nvmf/nvmf/nvmf.o 00:04:13.471 CXX test/cpp_headers/tree.o 00:04:13.471 CXX test/cpp_headers/ublk.o 00:04:13.471 CXX test/cpp_headers/util.o 00:04:13.730 CXX test/cpp_headers/uuid.o 00:04:13.730 CXX test/cpp_headers/version.o 00:04:13.730 CXX test/cpp_headers/vfio_user_pci.o 00:04:13.730 CXX test/cpp_headers/vfio_user_spec.o 00:04:13.730 CXX test/cpp_headers/vhost.o 00:04:13.730 CXX test/cpp_headers/vmd.o 00:04:13.730 CXX test/cpp_headers/xor.o 00:04:13.730 CXX test/cpp_headers/zipf.o 00:04:13.989 LINK nvmf 00:04:18.200 LINK esnap 00:04:18.200 00:04:18.200 real 1m23.232s 00:04:18.200 user 7m21.173s 00:04:18.200 sys 1m29.167s 00:04:18.200 07:36:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:18.200 07:36:07 make -- common/autotest_common.sh@10 -- $ set +x 00:04:18.200 ************************************ 00:04:18.200 END TEST make 00:04:18.200 ************************************ 00:04:18.200 07:36:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:18.200 07:36:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:18.200 07:36:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:18.200 07:36:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.200 07:36:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:18.200 07:36:07 -- pm/common@44 -- $ pid=5461 00:04:18.200 07:36:07 -- 
pm/common@50 -- $ kill -TERM 5461 00:04:18.200 07:36:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.200 07:36:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:18.200 07:36:07 -- pm/common@44 -- $ pid=5463 00:04:18.200 07:36:07 -- pm/common@50 -- $ kill -TERM 5463 00:04:18.200 07:36:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:18.200 07:36:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:18.200 07:36:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.200 07:36:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.200 07:36:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.200 07:36:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.200 07:36:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.200 07:36:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.200 07:36:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.200 07:36:08 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.200 07:36:08 -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.200 07:36:08 -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.200 07:36:08 -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.200 07:36:08 -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.200 07:36:08 -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.200 07:36:08 -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.200 07:36:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.200 07:36:08 -- scripts/common.sh@344 -- # case "$op" in 00:04:18.200 07:36:08 -- scripts/common.sh@345 -- # : 1 00:04:18.200 07:36:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.200 07:36:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.200 07:36:08 -- scripts/common.sh@365 -- # decimal 1 00:04:18.200 07:36:08 -- scripts/common.sh@353 -- # local d=1 00:04:18.200 07:36:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.200 07:36:08 -- scripts/common.sh@355 -- # echo 1 00:04:18.200 07:36:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.200 07:36:08 -- scripts/common.sh@366 -- # decimal 2 00:04:18.201 07:36:08 -- scripts/common.sh@353 -- # local d=2 00:04:18.201 07:36:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.201 07:36:08 -- scripts/common.sh@355 -- # echo 2 00:04:18.201 07:36:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.201 07:36:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.201 07:36:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.201 07:36:08 -- scripts/common.sh@368 -- # return 0 00:04:18.201 07:36:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.201 07:36:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.201 --rc genhtml_branch_coverage=1 00:04:18.201 --rc genhtml_function_coverage=1 00:04:18.201 --rc genhtml_legend=1 00:04:18.201 --rc geninfo_all_blocks=1 00:04:18.201 --rc geninfo_unexecuted_blocks=1 00:04:18.201 00:04:18.201 ' 00:04:18.201 07:36:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.201 --rc genhtml_branch_coverage=1 00:04:18.201 --rc genhtml_function_coverage=1 00:04:18.201 --rc genhtml_legend=1 00:04:18.201 --rc geninfo_all_blocks=1 00:04:18.201 --rc geninfo_unexecuted_blocks=1 00:04:18.201 00:04:18.201 ' 00:04:18.201 07:36:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.201 --rc genhtml_branch_coverage=1 00:04:18.201 --rc 
genhtml_function_coverage=1 00:04:18.201 --rc genhtml_legend=1 00:04:18.201 --rc geninfo_all_blocks=1 00:04:18.201 --rc geninfo_unexecuted_blocks=1 00:04:18.201 00:04:18.201 ' 00:04:18.201 07:36:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.201 --rc genhtml_branch_coverage=1 00:04:18.201 --rc genhtml_function_coverage=1 00:04:18.201 --rc genhtml_legend=1 00:04:18.201 --rc geninfo_all_blocks=1 00:04:18.201 --rc geninfo_unexecuted_blocks=1 00:04:18.201 00:04:18.201 ' 00:04:18.201 07:36:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:18.201 07:36:08 -- nvmf/common.sh@7 -- # uname -s 00:04:18.201 07:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.201 07:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.201 07:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.201 07:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.201 07:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.201 07:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.201 07:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.201 07:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.201 07:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.201 07:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.201 07:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8a55aa8-6913-4d26-998f-a1da9bb68def 00:04:18.201 07:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=c8a55aa8-6913-4d26-998f-a1da9bb68def 00:04:18.201 07:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.201 07:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.201 07:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.201 07:36:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:18.201 07:36:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.201 07:36:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.201 07:36:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.201 07:36:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.201 07:36:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.201 07:36:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.201 07:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.201 07:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.201 07:36:08 -- paths/export.sh@5 -- # export PATH 00:04:18.201 07:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.201 07:36:08 -- nvmf/common.sh@51 -- # : 0 00:04:18.201 07:36:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.201 07:36:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:18.201 07:36:08 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:18.201 07:36:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.201 07:36:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.201 07:36:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.201 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.201 07:36:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.201 07:36:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.201 07:36:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.201 07:36:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:18.201 07:36:08 -- spdk/autotest.sh@32 -- # uname -s 00:04:18.201 07:36:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:18.201 07:36:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:18.201 07:36:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.201 07:36:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:18.201 07:36:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.201 07:36:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:18.461 07:36:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:18.461 07:36:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:18.461 07:36:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:18.461 07:36:08 -- spdk/autotest.sh@48 -- # udevadm_pid=54408 00:04:18.461 07:36:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:18.461 07:36:08 -- pm/common@17 -- # local monitor 00:04:18.461 07:36:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.461 07:36:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.461 07:36:08 -- pm/common@21 -- # date +%s 00:04:18.461 07:36:08 -- pm/common@25 -- # sleep 1 00:04:18.461 07:36:08 -- 
pm/common@21 -- # date +%s 00:04:18.461 07:36:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732865768 00:04:18.461 07:36:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732865768 00:04:18.461 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732865768_collect-vmstat.pm.log 00:04:18.461 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732865768_collect-cpu-load.pm.log 00:04:19.400 07:36:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:19.400 07:36:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:19.400 07:36:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.400 07:36:09 -- common/autotest_common.sh@10 -- # set +x 00:04:19.400 07:36:09 -- spdk/autotest.sh@59 -- # create_test_list 00:04:19.400 07:36:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:19.400 07:36:09 -- common/autotest_common.sh@10 -- # set +x 00:04:19.400 07:36:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:19.400 07:36:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:19.400 07:36:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:19.400 07:36:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:19.400 07:36:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:19.400 07:36:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:19.400 07:36:09 -- common/autotest_common.sh@1457 -- # uname 00:04:19.400 07:36:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:19.400 07:36:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:19.400 07:36:09 -- common/autotest_common.sh@1477 -- 
# uname 00:04:19.400 07:36:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:19.400 07:36:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:19.400 07:36:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:19.400 lcov: LCOV version 1.15 00:04:19.400 07:36:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:34.323 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:34.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:49.212 07:36:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:49.212 07:36:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.212 07:36:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.212 07:36:38 -- spdk/autotest.sh@78 -- # rm -f 00:04:49.212 07:36:38 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.212 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:49.212 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:49.212 07:36:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:49.212 07:36:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:49.212 07:36:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:49.212 07:36:39 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:49.212 
07:36:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.212 07:36:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:49.212 07:36:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:49.212 07:36:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.212 07:36:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:49.212 07:36:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:49.212 07:36:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.212 07:36:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:49.212 07:36:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:49.212 07:36:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.212 07:36:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:49.212 07:36:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:49.212 07:36:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.212 07:36:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.212 07:36:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:49.212 07:36:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.212 07:36:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.212 07:36:39 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:49.212 07:36:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:49.212 07:36:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:49.212 No valid GPT data, bailing 00:04:49.212 07:36:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.212 07:36:39 -- scripts/common.sh@394 -- # pt= 00:04:49.212 07:36:39 -- scripts/common.sh@395 -- # return 1 00:04:49.212 07:36:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:49.212 1+0 records in 00:04:49.212 1+0 records out 00:04:49.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635331 s, 165 MB/s 00:04:49.212 07:36:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.212 07:36:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.212 07:36:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:49.212 07:36:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:49.212 07:36:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:49.472 No valid GPT data, bailing 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # pt= 00:04:49.473 07:36:39 -- scripts/common.sh@395 -- # return 1 00:04:49.473 07:36:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:49.473 1+0 records in 00:04:49.473 1+0 records out 00:04:49.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664621 s, 158 MB/s 00:04:49.473 07:36:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.473 07:36:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.473 07:36:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:49.473 07:36:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:49.473 07:36:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:49.473 No valid GPT data, bailing 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # pt= 00:04:49.473 07:36:39 -- scripts/common.sh@395 -- # return 1 00:04:49.473 07:36:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:49.473 1+0 records in 00:04:49.473 1+0 records out 00:04:49.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628234 s, 167 MB/s 00:04:49.473 07:36:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.473 07:36:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.473 07:36:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:49.473 07:36:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:49.473 07:36:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:49.473 No valid GPT data, bailing 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:49.473 07:36:39 -- scripts/common.sh@394 -- # pt= 00:04:49.473 07:36:39 -- scripts/common.sh@395 -- # return 1 00:04:49.473 07:36:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:49.473 1+0 records in 00:04:49.473 1+0 records out 00:04:49.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610752 s, 172 MB/s 00:04:49.473 07:36:39 -- spdk/autotest.sh@105 -- # sync 00:04:49.733 07:36:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:49.733 07:36:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.733 07:36:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:53.028 07:36:42 -- spdk/autotest.sh@111 -- # uname -s 00:04:53.028 07:36:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:53.028 07:36:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:53.028 07:36:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:53.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.287 Hugepages 00:04:53.287 node hugesize free / total 00:04:53.287 node0 1048576kB 0 / 0 00:04:53.287 node0 2048kB 0 / 0 00:04:53.287 00:04:53.287 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.287 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:53.546 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:53.546 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:53.546 07:36:43 -- spdk/autotest.sh@117 -- # uname -s 00:04:53.546 07:36:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:53.546 07:36:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:53.546 07:36:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.483 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.483 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.483 07:36:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:55.862 07:36:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:55.862 07:36:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:55.862 07:36:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.862 07:36:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:55.862 07:36:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.862 07:36:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.862 07:36:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.862 07:36:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.862 07:36:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.862 07:36:45 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:55.862 07:36:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:55.862 07:36:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.122 Waiting for block devices as requested 00:04:56.382 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.382 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.382 07:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:56.382 07:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:56.382 07:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:56.382 07:36:46 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:56.382 07:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:56.382 07:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:56.382 07:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:56.382 07:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:56.382 07:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:56.382 07:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:56.382 07:36:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:56.382 07:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:56.382 07:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:56.382 07:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:56.382 07:36:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.382 07:36:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.382 07:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.641 07:36:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.641 07:36:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.641 07:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:56.641 07:36:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.470 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.470 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.470 07:36:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:57.470 07:36:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.470 07:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 07:36:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:57.729 07:36:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:57.729 07:36:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.729 07:36:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:57.729 07:36:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:57.729 07:36:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:57.729 07:36:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:57.729 07:36:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:57.729 
07:36:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.729 07:36:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.729 07:36:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.729 07:36:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.729 07:36:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.729 07:36:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:57.729 07:36:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:57.729 07:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:57.729 07:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:57.729 07:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:57.729 07:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.729 07:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:57.729 07:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:57.729 07:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:57.729 07:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.729 07:36:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:57.729 07:36:47 -- common/autotest_common.sh@1572 -- # return 0 00:04:57.729 07:36:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:57.729 07:36:47 -- common/autotest_common.sh@1580 -- # return 0 00:04:57.729 07:36:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:57.729 07:36:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:57.729 07:36:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:57.729 07:36:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:57.729 07:36:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:57.729 07:36:47 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.729 07:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 07:36:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:57.729 07:36:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:57.729 07:36:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.729 07:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.729 07:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 ************************************ 00:04:57.729 START TEST env 00:04:57.729 ************************************ 00:04:57.729 07:36:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:57.729 * Looking for test storage... 00:04:57.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.997 07:36:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.997 07:36:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.997 07:36:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.997 07:36:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.997 07:36:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.997 07:36:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.997 07:36:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.997 07:36:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.997 07:36:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.997 07:36:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.997 07:36:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.997 07:36:47 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:57.997 07:36:47 env -- scripts/common.sh@345 -- # : 1 00:04:57.997 07:36:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.997 07:36:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.997 07:36:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:57.997 07:36:47 env -- scripts/common.sh@353 -- # local d=1 00:04:57.997 07:36:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.997 07:36:47 env -- scripts/common.sh@355 -- # echo 1 00:04:57.997 07:36:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.997 07:36:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:57.997 07:36:47 env -- scripts/common.sh@353 -- # local d=2 00:04:57.997 07:36:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.997 07:36:47 env -- scripts/common.sh@355 -- # echo 2 00:04:57.997 07:36:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.997 07:36:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.997 07:36:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.997 07:36:47 env -- scripts/common.sh@368 -- # return 0 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.997 --rc genhtml_branch_coverage=1 00:04:57.997 --rc genhtml_function_coverage=1 00:04:57.997 --rc genhtml_legend=1 00:04:57.997 --rc geninfo_all_blocks=1 00:04:57.997 --rc geninfo_unexecuted_blocks=1 00:04:57.997 00:04:57.997 ' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.997 --rc genhtml_branch_coverage=1 00:04:57.997 --rc genhtml_function_coverage=1 00:04:57.997 --rc genhtml_legend=1 00:04:57.997 --rc 
geninfo_all_blocks=1 00:04:57.997 --rc geninfo_unexecuted_blocks=1 00:04:57.997 00:04:57.997 ' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.997 --rc genhtml_branch_coverage=1 00:04:57.997 --rc genhtml_function_coverage=1 00:04:57.997 --rc genhtml_legend=1 00:04:57.997 --rc geninfo_all_blocks=1 00:04:57.997 --rc geninfo_unexecuted_blocks=1 00:04:57.997 00:04:57.997 ' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.997 --rc genhtml_branch_coverage=1 00:04:57.997 --rc genhtml_function_coverage=1 00:04:57.997 --rc genhtml_legend=1 00:04:57.997 --rc geninfo_all_blocks=1 00:04:57.997 --rc geninfo_unexecuted_blocks=1 00:04:57.997 00:04:57.997 ' 00:04:57.997 07:36:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.997 07:36:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.997 07:36:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 ************************************ 00:04:57.997 START TEST env_memory 00:04:57.997 ************************************ 00:04:57.997 07:36:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:57.997 00:04:57.997 00:04:57.997 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.997 http://cunit.sourceforge.net/ 00:04:57.997 00:04:57.997 00:04:57.997 Suite: memory 00:04:57.997 Test: alloc and free memory map ...[2024-11-29 07:36:47.860527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:57.997 passed 00:04:57.997 Test: mem map translation ...[2024-11-29 07:36:47.906911] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:57.997 [2024-11-29 07:36:47.906984] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:57.997 [2024-11-29 07:36:47.907053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:57.998 [2024-11-29 07:36:47.907091] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.269 passed 00:04:58.269 Test: mem map registration ...[2024-11-29 07:36:47.978287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:58.269 [2024-11-29 07:36:47.978371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:58.269 passed 00:04:58.269 Test: mem map adjacent registrations ...passed 00:04:58.269 00:04:58.269 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.269 suites 1 1 n/a 0 0 00:04:58.269 tests 4 4 4 0 0 00:04:58.269 asserts 152 152 152 0 n/a 00:04:58.269 00:04:58.269 Elapsed time = 0.244 seconds 00:04:58.269 00:04:58.269 real 0m0.297s 00:04:58.269 user 0m0.256s 00:04:58.269 sys 0m0.030s 00:04:58.269 07:36:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.269 07:36:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.269 ************************************ 00:04:58.269 END TEST env_memory 00:04:58.269 ************************************ 00:04:58.269 07:36:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.269 
07:36:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.269 07:36:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.269 07:36:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.269 ************************************ 00:04:58.269 START TEST env_vtophys 00:04:58.269 ************************************ 00:04:58.269 07:36:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.269 EAL: lib.eal log level changed from notice to debug 00:04:58.269 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 1 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 2 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 3 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 4 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 5 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 6 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 7 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 8 as core 0 on socket 0 00:04:58.269 EAL: Detected lcore 9 as core 0 on socket 0 00:04:58.269 EAL: Maximum logical cores by configuration: 128 00:04:58.269 EAL: Detected CPU lcores: 10 00:04:58.269 EAL: Detected NUMA nodes: 1 00:04:58.269 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:58.269 EAL: Detected shared linkage of DPDK 00:04:58.529 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.529 EAL: Selected IOVA mode 'PA' 00:04:58.529 EAL: Probing VFIO support... 00:04:58.529 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.529 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:58.529 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.529 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.529 EAL: Setting up physically contiguous memory... 
00:04:58.529 EAL: Setting maximum number of open files to 524288 00:04:58.529 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.529 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.529 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.529 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.529 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.529 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.529 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.529 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.529 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.529 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.529 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.529 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.529 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.529 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.529 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.529 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.529 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.529 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.529 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.529 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.529 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.529 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.529 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.529 EAL: Hugepages will be freed exactly as allocated. 
00:04:58.529 EAL: No shared files mode enabled, IPC is disabled 00:04:58.529 EAL: No shared files mode enabled, IPC is disabled 00:04:58.529 EAL: TSC frequency is ~2290000 KHz 00:04:58.529 EAL: Main lcore 0 is ready (tid=7ffa862a9a40;cpuset=[0]) 00:04:58.529 EAL: Trying to obtain current memory policy. 00:04:58.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.529 EAL: Restoring previous memory policy: 0 00:04:58.529 EAL: request: mp_malloc_sync 00:04:58.529 EAL: No shared files mode enabled, IPC is disabled 00:04:58.529 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.529 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.529 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.529 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.529 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:58.529 00:04:58.529 00:04:58.529 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.529 http://cunit.sourceforge.net/ 00:04:58.529 00:04:58.529 00:04:58.529 Suite: components_suite 00:04:58.789 Test: vtophys_malloc_test ...passed 00:04:58.789 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:58.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.789 EAL: Restoring previous memory policy: 4 00:04:58.789 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.789 EAL: request: mp_malloc_sync 00:04:58.789 EAL: No shared files mode enabled, IPC is disabled 00:04:58.789 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.789 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.789 EAL: request: mp_malloc_sync 00:04:58.789 EAL: No shared files mode enabled, IPC is disabled 00:04:58.790 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.049 EAL: Trying to obtain current memory policy. 
00:04:59.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.049 EAL: Restoring previous memory policy: 4 00:04:59.049 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.049 EAL: request: mp_malloc_sync 00:04:59.049 EAL: No shared files mode enabled, IPC is disabled 00:04:59.049 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.049 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.049 EAL: request: mp_malloc_sync 00:04:59.049 EAL: No shared files mode enabled, IPC is disabled 00:04:59.049 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.049 EAL: Trying to obtain current memory policy. 00:04:59.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.049 EAL: Restoring previous memory policy: 4 00:04:59.049 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.049 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.050 EAL: Trying to obtain current memory policy. 00:04:59.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.050 EAL: Restoring previous memory policy: 4 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.050 EAL: Trying to obtain current memory policy. 
00:04:59.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.050 EAL: Restoring previous memory policy: 4 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.050 EAL: Trying to obtain current memory policy. 00:04:59.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.050 EAL: Restoring previous memory policy: 4 00:04:59.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.050 EAL: request: mp_malloc_sync 00:04:59.050 EAL: No shared files mode enabled, IPC is disabled 00:04:59.050 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.309 EAL: request: mp_malloc_sync 00:04:59.309 EAL: No shared files mode enabled, IPC is disabled 00:04:59.309 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.309 EAL: Trying to obtain current memory policy. 00:04:59.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.309 EAL: Restoring previous memory policy: 4 00:04:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.309 EAL: request: mp_malloc_sync 00:04:59.309 EAL: No shared files mode enabled, IPC is disabled 00:04:59.309 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.569 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.569 EAL: request: mp_malloc_sync 00:04:59.569 EAL: No shared files mode enabled, IPC is disabled 00:04:59.569 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.829 EAL: Trying to obtain current memory policy. 
00:04:59.829 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.829 EAL: Restoring previous memory policy: 4 00:04:59.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.829 EAL: request: mp_malloc_sync 00:04:59.829 EAL: No shared files mode enabled, IPC is disabled 00:04:59.829 EAL: Heap on socket 0 was expanded by 258MB 00:05:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.398 EAL: request: mp_malloc_sync 00:05:00.398 EAL: No shared files mode enabled, IPC is disabled 00:05:00.398 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.966 EAL: Trying to obtain current memory policy. 00:05:00.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.966 EAL: Restoring previous memory policy: 4 00:05:00.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.966 EAL: request: mp_malloc_sync 00:05:00.966 EAL: No shared files mode enabled, IPC is disabled 00:05:00.966 EAL: Heap on socket 0 was expanded by 514MB 00:05:01.907 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.907 EAL: request: mp_malloc_sync 00:05:01.907 EAL: No shared files mode enabled, IPC is disabled 00:05:01.907 EAL: Heap on socket 0 was shrunk by 514MB 00:05:02.846 EAL: Trying to obtain current memory policy. 
00:05:02.846 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.847 EAL: Restoring previous memory policy: 4
00:05:02.847 EAL: Calling mem event callback 'spdk:(nil)'
00:05:02.847 EAL: request: mp_malloc_sync
00:05:02.847 EAL: No shared files mode enabled, IPC is disabled
00:05:02.847 EAL: Heap on socket 0 was expanded by 1026MB
00:05:04.757 EAL: Calling mem event callback 'spdk:(nil)'
00:05:04.757 EAL: request: mp_malloc_sync
00:05:04.757 EAL: No shared files mode enabled, IPC is disabled
00:05:04.757 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:06.667 passed
00:05:06.667
00:05:06.667 Run Summary: Type Total Ran Passed Failed Inactive
00:05:06.667 suites 1 1 n/a 0 0
00:05:06.667 tests 2 2 2 0 0
00:05:06.667 asserts 5761 5761 5761 0 n/a
00:05:06.667
00:05:06.667 Elapsed time = 7.848 seconds
00:05:06.667 EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.667 EAL: request: mp_malloc_sync
00:05:06.667 EAL: No shared files mode enabled, IPC is disabled
00:05:06.667 EAL: Heap on socket 0 was shrunk by 2MB
00:05:06.667 EAL: No shared files mode enabled, IPC is disabled
00:05:06.667 EAL: No shared files mode enabled, IPC is disabled
00:05:06.667 EAL: No shared files mode enabled, IPC is disabled
00:05:06.667
00:05:06.667 real 0m8.169s
00:05:06.667 user 0m7.237s
00:05:06.667 sys 0m0.778s
00:05:06.667 07:36:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:06.667 07:36:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:06.667 ************************************
00:05:06.667 END TEST env_vtophys
00:05:06.667 ************************************
00:05:06.667 07:36:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:06.667 07:36:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:06.667 07:36:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:06.667 07:36:56 env -- common/autotest_common.sh@10 -- # set +x
00:05:06.667 ************************************
00:05:06.667 START TEST env_pci
00:05:06.667 ************************************
00:05:06.667 07:36:56 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:06.667
00:05:06.667
00:05:06.667 CUnit - A unit testing framework for C - Version 2.1-3
00:05:06.667 http://cunit.sourceforge.net/
00:05:06.667
00:05:06.667
00:05:06.667 Suite: pci
00:05:06.667 Test: pci_hook ...[2024-11-29 07:36:56.423796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56713 has claimed it
00:05:06.667 passed
00:05:06.667
00:05:06.667 Run Summary: Type Total Ran Passed Failed Inactive
00:05:06.667 suites 1 1 n/a 0 0
00:05:06.667 tests 1 1 1 0 0
00:05:06.667 asserts 25 25 25 0 n/a
00:05:06.667
00:05:06.667 Elapsed time = 0.006 seconds
00:05:06.667 EAL: Cannot find device (10000:00:01.0)
00:05:06.667 EAL: Failed to attach device on primary process
00:05:06.667
00:05:06.667 real 0m0.106s
00:05:06.667 user 0m0.049s
00:05:06.667 sys 0m0.056s
00:05:06.667 07:36:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:06.668 07:36:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:06.668 ************************************
00:05:06.668 END TEST env_pci
00:05:06.668 ************************************
00:05:06.668 07:36:56 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:06.668 07:36:56 env -- env/env.sh@15 -- # uname
00:05:06.668 07:36:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:06.668 07:36:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:06.668 07:36:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:06.668 07:36:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:06.668 07:36:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:06.668 07:36:56 env -- common/autotest_common.sh@10 -- # set +x
00:05:06.668 ************************************
00:05:06.668 START TEST env_dpdk_post_init
00:05:06.668 ************************************
00:05:06.668 07:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:06.928 EAL: Detected CPU lcores: 10
00:05:06.928 EAL: Detected NUMA nodes: 1
00:05:06.928 EAL: Detected shared linkage of DPDK
00:05:06.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:06.928 EAL: Selected IOVA mode 'PA'
00:05:06.928 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:06.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:06.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:06.928 Starting DPDK initialization...
00:05:06.928 Starting SPDK post initialization...
00:05:06.928 SPDK NVMe probe
00:05:06.928 Attaching to 0000:00:10.0
00:05:06.928 Attaching to 0000:00:11.0
00:05:06.928 Attached to 0000:00:10.0
00:05:06.928 Attached to 0000:00:11.0
00:05:06.928 Cleaning up...
00:05:06.928
00:05:06.928 real 0m0.276s
00:05:06.928 user 0m0.091s
00:05:06.928 sys 0m0.086s
00:05:06.928 07:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:06.928 07:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:06.928 ************************************
00:05:06.928 END TEST env_dpdk_post_init
00:05:06.928 ************************************
00:05:07.188 07:36:56 env -- env/env.sh@26 -- # uname
00:05:07.188 07:36:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:07.188 07:36:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:07.188 07:36:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:07.188 07:36:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.188 07:36:56 env -- common/autotest_common.sh@10 -- # set +x
00:05:07.188 ************************************
00:05:07.188 START TEST env_mem_callbacks
00:05:07.188 ************************************
00:05:07.188 07:36:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:07.188 EAL: Detected CPU lcores: 10
00:05:07.188 EAL: Detected NUMA nodes: 1
00:05:07.188 EAL: Detected shared linkage of DPDK
00:05:07.188 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:07.188 EAL: Selected IOVA mode 'PA'
00:05:07.188 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:07.188
00:05:07.188
00:05:07.188 CUnit - A unit testing framework for C - Version 2.1-3
00:05:07.188 http://cunit.sourceforge.net/
00:05:07.188
00:05:07.188
00:05:07.188 Suite: memory
00:05:07.188 Test: test ...
00:05:07.188 register 0x200000200000 2097152
00:05:07.188 malloc 3145728
00:05:07.188 register 0x200000400000 4194304
00:05:07.188 buf 0x2000004fffc0 len 3145728 PASSED
00:05:07.188 malloc 64
00:05:07.188 buf 0x2000004ffec0 len 64 PASSED
00:05:07.188 malloc 4194304
00:05:07.188 register 0x200000800000 6291456
00:05:07.188 buf 0x2000009fffc0 len 4194304 PASSED
00:05:07.188 free 0x2000004fffc0 3145728
00:05:07.188 free 0x2000004ffec0 64
00:05:07.189 unregister 0x200000400000 4194304 PASSED
00:05:07.189 free 0x2000009fffc0 4194304
00:05:07.189 unregister 0x200000800000 6291456 PASSED
00:05:07.449 malloc 8388608
00:05:07.449 register 0x200000400000 10485760
00:05:07.449 buf 0x2000005fffc0 len 8388608 PASSED
00:05:07.449 free 0x2000005fffc0 8388608
00:05:07.449 unregister 0x200000400000 10485760 PASSED
00:05:07.449 passed
00:05:07.449
00:05:07.449 Run Summary: Type Total Ran Passed Failed Inactive
00:05:07.449 suites 1 1 n/a 0 0
00:05:07.449 tests 1 1 1 0 0
00:05:07.449 asserts 15 15 15 0 n/a
00:05:07.449
00:05:07.449 Elapsed time = 0.081 seconds
00:05:07.449
00:05:07.449 real 0m0.275s
00:05:07.449 user 0m0.108s
00:05:07.449 sys 0m0.065s
00:05:07.449 07:36:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.449 07:36:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:07.449 ************************************
00:05:07.449 END TEST env_mem_callbacks
00:05:07.449 ************************************
00:05:07.449
00:05:07.449 real 0m9.681s
00:05:07.449 user 0m7.972s
00:05:07.449 sys 0m1.354s
00:05:07.449 07:36:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.449 07:36:57 env -- common/autotest_common.sh@10 -- # set +x
00:05:07.449 ************************************
00:05:07.449 END TEST env
00:05:07.449 ************************************
00:05:07.449 07:36:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:07.449 07:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:07.449 07:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.449 07:36:57 -- common/autotest_common.sh@10 -- # set +x
00:05:07.449 ************************************
00:05:07.449 START TEST rpc
00:05:07.449 ************************************
00:05:07.449 07:36:57 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:07.709 * Looking for test storage...
00:05:07.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:07.709 07:36:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:07.709 07:36:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:07.709 07:36:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:07.709 07:36:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:07.709 07:36:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:07.709 07:36:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:07.709 07:36:57 rpc -- scripts/common.sh@345 -- # : 1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:07.709 07:36:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:07.709 07:36:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@353 -- # local d=1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:07.709 07:36:57 rpc -- scripts/common.sh@355 -- # echo 1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:07.709 07:36:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@353 -- # local d=2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:07.709 07:36:57 rpc -- scripts/common.sh@355 -- # echo 2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:07.709 07:36:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:07.709 07:36:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:07.709 07:36:57 rpc -- scripts/common.sh@368 -- # return 0
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.709 --rc genhtml_branch_coverage=1
00:05:07.709 --rc genhtml_function_coverage=1
00:05:07.709 --rc genhtml_legend=1
00:05:07.709 --rc geninfo_all_blocks=1
00:05:07.709 --rc geninfo_unexecuted_blocks=1
00:05:07.709
00:05:07.709 '
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.709 --rc genhtml_branch_coverage=1
00:05:07.709 --rc genhtml_function_coverage=1
00:05:07.709 --rc genhtml_legend=1
00:05:07.709 --rc geninfo_all_blocks=1
00:05:07.709 --rc geninfo_unexecuted_blocks=1
00:05:07.709
00:05:07.709 '
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.709 --rc genhtml_branch_coverage=1
00:05:07.709 --rc genhtml_function_coverage=1
00:05:07.709 --rc genhtml_legend=1
00:05:07.709 --rc geninfo_all_blocks=1
00:05:07.709 --rc geninfo_unexecuted_blocks=1
00:05:07.709
00:05:07.709 '
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.709 --rc genhtml_branch_coverage=1
00:05:07.709 --rc genhtml_function_coverage=1
00:05:07.709 --rc genhtml_legend=1
00:05:07.709 --rc geninfo_all_blocks=1
00:05:07.709 --rc geninfo_unexecuted_blocks=1
00:05:07.709
00:05:07.709 '
00:05:07.709 07:36:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56840
00:05:07.709 07:36:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:07.709 07:36:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:07.709 07:36:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56840
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 56840 ']'
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:07.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:07.709 07:36:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.710 [2024-11-29 07:36:57.620433] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:07.710 [2024-11-29 07:36:57.620551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56840 ]
00:05:07.969 [2024-11-29 07:36:57.796129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.969 [2024-11-29 07:36:57.901003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:07.969 [2024-11-29 07:36:57.901056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56840' to capture a snapshot of events at runtime.
00:05:07.969 [2024-11-29 07:36:57.901065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:07.969 [2024-11-29 07:36:57.901074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:07.969 [2024-11-29 07:36:57.901081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56840 for offline analysis/debug.
00:05:07.969 [2024-11-29 07:36:57.902271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.908 07:36:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:08.908 07:36:58 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:08.908 07:36:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:08.908 07:36:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:08.908 07:36:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:08.908 07:36:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:08.908 07:36:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.908 07:36:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.908 07:36:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.908 ************************************
00:05:08.908 START TEST rpc_integrity
00:05:08.908 ************************************
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:08.908 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:08.908 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:09.168 {
00:05:09.168 "name": "Malloc0",
00:05:09.168 "aliases": [
00:05:09.168 "95b3dc86-1823-4624-bd7a-dbc850034acf"
00:05:09.168 ],
00:05:09.168 "product_name": "Malloc disk",
00:05:09.168 "block_size": 512,
00:05:09.168 "num_blocks": 16384,
00:05:09.168 "uuid": "95b3dc86-1823-4624-bd7a-dbc850034acf",
00:05:09.168 "assigned_rate_limits": {
00:05:09.168 "rw_ios_per_sec": 0,
00:05:09.168 "rw_mbytes_per_sec": 0,
00:05:09.168 "r_mbytes_per_sec": 0,
00:05:09.168 "w_mbytes_per_sec": 0
00:05:09.168 },
00:05:09.168 "claimed": false,
00:05:09.168 "zoned": false,
00:05:09.168 "supported_io_types": {
00:05:09.168 "read": true,
00:05:09.168 "write": true,
00:05:09.168 "unmap": true,
00:05:09.168 "flush": true,
00:05:09.168 "reset": true,
00:05:09.168 "nvme_admin": false,
00:05:09.168 "nvme_io": false,
00:05:09.168 "nvme_io_md": false,
00:05:09.168 "write_zeroes": true,
00:05:09.168 "zcopy": true,
00:05:09.168 "get_zone_info": false,
00:05:09.168 "zone_management": false,
00:05:09.168 "zone_append": false,
00:05:09.168 "compare": false,
00:05:09.168 "compare_and_write": false,
00:05:09.168 "abort": true,
00:05:09.168 "seek_hole": false,
00:05:09.168 "seek_data": false,
00:05:09.168 "copy": true,
00:05:09.168 "nvme_iov_md": false
00:05:09.168 },
00:05:09.168 "memory_domains": [
00:05:09.168 {
00:05:09.168 "dma_device_id": "system",
00:05:09.168 "dma_device_type": 1
00:05:09.168 },
00:05:09.168 {
00:05:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.168 "dma_device_type": 2
00:05:09.168 }
00:05:09.168 ],
00:05:09.168 "driver_specific": {}
00:05:09.168 }
00:05:09.168 ]'
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.168 [2024-11-29 07:36:58.902344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:09.168 [2024-11-29 07:36:58.902395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:09.168 [2024-11-29 07:36:58.902418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:05:09.168 [2024-11-29 07:36:58.902436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:09.168 [2024-11-29 07:36:58.904684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:09.168 [2024-11-29 07:36:58.904721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:09.168 Passthru0
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:09.168 {
00:05:09.168 "name": "Malloc0",
00:05:09.168 "aliases": [
00:05:09.168 "95b3dc86-1823-4624-bd7a-dbc850034acf"
00:05:09.168 ],
00:05:09.168 "product_name": "Malloc disk",
00:05:09.168 "block_size": 512,
00:05:09.168 "num_blocks": 16384,
00:05:09.168 "uuid": "95b3dc86-1823-4624-bd7a-dbc850034acf",
00:05:09.168 "assigned_rate_limits": {
00:05:09.168 "rw_ios_per_sec": 0,
00:05:09.168 "rw_mbytes_per_sec": 0,
00:05:09.168 "r_mbytes_per_sec": 0,
00:05:09.168 "w_mbytes_per_sec": 0
00:05:09.168 },
00:05:09.168 "claimed": true,
00:05:09.168 "claim_type": "exclusive_write",
00:05:09.168 "zoned": false,
00:05:09.168 "supported_io_types": {
00:05:09.168 "read": true,
00:05:09.168 "write": true,
00:05:09.168 "unmap": true,
00:05:09.168 "flush": true,
00:05:09.168 "reset": true,
00:05:09.168 "nvme_admin": false,
00:05:09.168 "nvme_io": false,
00:05:09.168 "nvme_io_md": false,
00:05:09.168 "write_zeroes": true,
00:05:09.168 "zcopy": true,
00:05:09.168 "get_zone_info": false,
00:05:09.168 "zone_management": false,
00:05:09.168 "zone_append": false,
00:05:09.168 "compare": false,
00:05:09.168 "compare_and_write": false,
00:05:09.168 "abort": true,
00:05:09.168 "seek_hole": false,
00:05:09.168 "seek_data": false,
00:05:09.168 "copy": true,
00:05:09.168 "nvme_iov_md": false
00:05:09.168 },
00:05:09.168 "memory_domains": [
00:05:09.168 {
00:05:09.168 "dma_device_id": "system",
00:05:09.168 "dma_device_type": 1
00:05:09.168 },
00:05:09.168 {
00:05:09.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.168 "dma_device_type": 2
00:05:09.168 }
00:05:09.168 ],
00:05:09.168 "driver_specific": {}
00:05:09.168 },
00:05:09.168 {
00:05:09.168 "name": "Passthru0",
00:05:09.168 "aliases": [
00:05:09.168 "b5c7748b-2ea3-5a21-82ba-45d0bde009e2"
00:05:09.168 ],
00:05:09.168 "product_name": "passthru",
00:05:09.168 "block_size": 512,
00:05:09.168 "num_blocks": 16384,
00:05:09.168 "uuid": "b5c7748b-2ea3-5a21-82ba-45d0bde009e2",
00:05:09.168 "assigned_rate_limits": {
00:05:09.168 "rw_ios_per_sec": 0,
00:05:09.168 "rw_mbytes_per_sec": 0,
00:05:09.168 "r_mbytes_per_sec": 0,
00:05:09.168 "w_mbytes_per_sec": 0
00:05:09.168 },
00:05:09.168 "claimed": false,
00:05:09.168 "zoned": false,
00:05:09.168 "supported_io_types": {
00:05:09.168 "read": true,
00:05:09.168 "write": true,
00:05:09.168 "unmap": true,
00:05:09.168 "flush": true,
00:05:09.168 "reset": true,
00:05:09.169 "nvme_admin": false,
00:05:09.169 "nvme_io": false,
00:05:09.169 "nvme_io_md": false,
00:05:09.169 "write_zeroes": true,
00:05:09.169 "zcopy": true,
00:05:09.169 "get_zone_info": false,
00:05:09.169 "zone_management": false,
00:05:09.169 "zone_append": false,
00:05:09.169 "compare": false,
00:05:09.169 "compare_and_write": false,
00:05:09.169 "abort": true,
00:05:09.169 "seek_hole": false,
00:05:09.169 "seek_data": false,
00:05:09.169 "copy": true,
00:05:09.169 "nvme_iov_md": false
00:05:09.169 },
00:05:09.169 "memory_domains": [
00:05:09.169 {
00:05:09.169 "dma_device_id": "system",
00:05:09.169 "dma_device_type": 1
00:05:09.169 },
00:05:09.169 {
00:05:09.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.169 "dma_device_type": 2
00:05:09.169 }
00:05:09.169 ],
00:05:09.169 "driver_specific": {
00:05:09.169 "passthru": {
00:05:09.169 "name": "Passthru0",
00:05:09.169 "base_bdev_name": "Malloc0"
00:05:09.169 }
00:05:09.169 }
00:05:09.169 }
00:05:09.169 ]'
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:09.168 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.168 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.169 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.169 07:36:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:09.169 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.169 07:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.169 07:36:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.169 07:36:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:09.169 07:36:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:09.169 07:36:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:09.169
00:05:09.169 real 0m0.346s
00:05:09.169 user 0m0.197s
00:05:09.169 sys 0m0.054s
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.169 07:36:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.169 ************************************
00:05:09.169 END TEST rpc_integrity
00:05:09.169 ************************************
00:05:09.429 07:36:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 ************************************
00:05:09.429 START TEST rpc_plugins
00:05:09.429 ************************************
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:09.429 {
00:05:09.429 "name": "Malloc1",
00:05:09.429 "aliases": [
00:05:09.429 "aaceefa1-acc8-454f-8480-55bccfc0ace0"
00:05:09.429 ],
00:05:09.429 "product_name": "Malloc disk",
00:05:09.429 "block_size": 4096,
00:05:09.429 "num_blocks": 256,
00:05:09.429 "uuid": "aaceefa1-acc8-454f-8480-55bccfc0ace0",
00:05:09.429 "assigned_rate_limits": {
00:05:09.429 "rw_ios_per_sec": 0,
00:05:09.429 "rw_mbytes_per_sec": 0,
00:05:09.429 "r_mbytes_per_sec": 0,
00:05:09.429 "w_mbytes_per_sec": 0
00:05:09.429 },
00:05:09.429 "claimed": false,
00:05:09.429 "zoned": false,
00:05:09.429 "supported_io_types": {
00:05:09.429 "read": true,
00:05:09.429 "write": true,
00:05:09.429 "unmap": true,
00:05:09.429 "flush": true,
00:05:09.429 "reset": true,
00:05:09.429 "nvme_admin": false,
00:05:09.429 "nvme_io": false,
00:05:09.429 "nvme_io_md": false,
00:05:09.429 "write_zeroes": true,
00:05:09.429 "zcopy": true,
00:05:09.429 "get_zone_info": false,
00:05:09.429 "zone_management": false,
00:05:09.429 "zone_append": false,
00:05:09.429 "compare": false,
00:05:09.429 "compare_and_write": false,
00:05:09.429 "abort": true,
00:05:09.429 "seek_hole": false,
00:05:09.429 "seek_data": false,
00:05:09.429 "copy": true,
00:05:09.429 "nvme_iov_md": false
00:05:09.429 },
00:05:09.429 "memory_domains": [
00:05:09.429 {
00:05:09.429 "dma_device_id": "system",
00:05:09.429 "dma_device_type": 1
00:05:09.429 },
00:05:09.429 {
00:05:09.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.429 "dma_device_type": 2
00:05:09.429 }
00:05:09.429 ],
00:05:09.429 "driver_specific": {}
00:05:09.429 }
00:05:09.429 ]'
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:09.429 07:36:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:09.429
00:05:09.429 real 0m0.161s
00:05:09.429 user 0m0.097s
00:05:09.429 sys 0m0.021s
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 ************************************
00:05:09.429 END TEST rpc_plugins
00:05:09.429 ************************************
00:05:09.429 07:36:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.429 07:36:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.429 ************************************
00:05:09.429 START TEST rpc_trace_cmd_test
00:05:09.429 ************************************
00:05:09.429 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:05:09.429 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:09.429 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:09.429 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.429 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:09.690 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56840",
00:05:09.690 "tpoint_group_mask": "0x8",
00:05:09.690 "iscsi_conn": {
00:05:09.690 "mask": "0x2",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "scsi": {
00:05:09.690 "mask": "0x4",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "bdev": {
00:05:09.690 "mask": "0x8",
00:05:09.690 "tpoint_mask": "0xffffffffffffffff"
00:05:09.690 },
00:05:09.690 "nvmf_rdma": {
00:05:09.690 "mask": "0x10",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "nvmf_tcp": {
00:05:09.690 "mask": "0x20",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "ftl": {
00:05:09.690 "mask": "0x40",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "blobfs": {
00:05:09.690 "mask": "0x80",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "dsa": {
00:05:09.690 "mask": "0x200",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "thread": {
00:05:09.690 "mask": "0x400",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "nvme_pcie": {
00:05:09.690 "mask": "0x800",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "iaa": {
00:05:09.690 "mask": "0x1000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "nvme_tcp": {
00:05:09.690 "mask": "0x2000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "bdev_nvme": {
00:05:09.690 "mask": "0x4000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "sock": {
00:05:09.690 "mask": "0x8000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "blob": {
00:05:09.690 "mask": "0x10000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "bdev_raid": {
00:05:09.690 "mask": "0x20000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 },
00:05:09.690 "scheduler": {
00:05:09.690 "mask": "0x40000",
00:05:09.690 "tpoint_mask": "0x0"
00:05:09.690 }
00:05:09.690 }'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:09.690
00:05:09.690 real 0m0.239s
00:05:09.690 user 0m0.188s
00:05:09.690 sys 0m0.040s
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.690 07:36:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.690 ************************************ 00:05:09.690 END TEST rpc_trace_cmd_test 00:05:09.690 ************************************ 00:05:09.950 07:36:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:09.950 07:36:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.950 07:36:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.950 07:36:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.950 07:36:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.950 07:36:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.950 ************************************ 00:05:09.950 START TEST rpc_daemon_integrity 00:05:09.950 ************************************ 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.950 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.951 { 00:05:09.951 "name": "Malloc2", 00:05:09.951 "aliases": [ 00:05:09.951 "35157a2e-938c-4791-b9a9-e1c1304e418d" 00:05:09.951 ], 00:05:09.951 "product_name": "Malloc disk", 00:05:09.951 "block_size": 512, 00:05:09.951 "num_blocks": 16384, 00:05:09.951 "uuid": "35157a2e-938c-4791-b9a9-e1c1304e418d", 00:05:09.951 "assigned_rate_limits": { 00:05:09.951 "rw_ios_per_sec": 0, 00:05:09.951 "rw_mbytes_per_sec": 0, 00:05:09.951 "r_mbytes_per_sec": 0, 00:05:09.951 "w_mbytes_per_sec": 0 00:05:09.951 }, 00:05:09.951 "claimed": false, 00:05:09.951 "zoned": false, 00:05:09.951 "supported_io_types": { 00:05:09.951 "read": true, 00:05:09.951 "write": true, 00:05:09.951 "unmap": true, 00:05:09.951 "flush": true, 00:05:09.951 "reset": true, 00:05:09.951 "nvme_admin": false, 00:05:09.951 "nvme_io": false, 00:05:09.951 "nvme_io_md": false, 00:05:09.951 "write_zeroes": true, 00:05:09.951 "zcopy": true, 00:05:09.951 "get_zone_info": false, 00:05:09.951 "zone_management": false, 00:05:09.951 "zone_append": false, 00:05:09.951 "compare": false, 00:05:09.951 "compare_and_write": false, 00:05:09.951 "abort": true, 00:05:09.951 "seek_hole": false, 00:05:09.951 "seek_data": false, 00:05:09.951 "copy": true, 00:05:09.951 "nvme_iov_md": false 00:05:09.951 }, 00:05:09.951 "memory_domains": [ 00:05:09.951 { 00:05:09.951 "dma_device_id": "system", 00:05:09.951 "dma_device_type": 1 00:05:09.951 }, 00:05:09.951 { 00:05:09.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.951 "dma_device_type": 2 00:05:09.951 } 
00:05:09.951 ], 00:05:09.951 "driver_specific": {} 00:05:09.951 } 00:05:09.951 ]' 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.951 [2024-11-29 07:36:59.831213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:09.951 [2024-11-29 07:36:59.831260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.951 [2024-11-29 07:36:59.831295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:09.951 [2024-11-29 07:36:59.831305] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.951 [2024-11-29 07:36:59.833526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.951 [2024-11-29 07:36:59.833558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.951 Passthru0 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.951 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.951 { 00:05:09.951 "name": "Malloc2", 00:05:09.951 "aliases": [ 00:05:09.951 "35157a2e-938c-4791-b9a9-e1c1304e418d" 
00:05:09.951 ], 00:05:09.951 "product_name": "Malloc disk", 00:05:09.951 "block_size": 512, 00:05:09.951 "num_blocks": 16384, 00:05:09.951 "uuid": "35157a2e-938c-4791-b9a9-e1c1304e418d", 00:05:09.951 "assigned_rate_limits": { 00:05:09.951 "rw_ios_per_sec": 0, 00:05:09.951 "rw_mbytes_per_sec": 0, 00:05:09.951 "r_mbytes_per_sec": 0, 00:05:09.951 "w_mbytes_per_sec": 0 00:05:09.951 }, 00:05:09.951 "claimed": true, 00:05:09.951 "claim_type": "exclusive_write", 00:05:09.951 "zoned": false, 00:05:09.951 "supported_io_types": { 00:05:09.951 "read": true, 00:05:09.951 "write": true, 00:05:09.951 "unmap": true, 00:05:09.951 "flush": true, 00:05:09.951 "reset": true, 00:05:09.951 "nvme_admin": false, 00:05:09.951 "nvme_io": false, 00:05:09.951 "nvme_io_md": false, 00:05:09.951 "write_zeroes": true, 00:05:09.951 "zcopy": true, 00:05:09.951 "get_zone_info": false, 00:05:09.951 "zone_management": false, 00:05:09.951 "zone_append": false, 00:05:09.951 "compare": false, 00:05:09.951 "compare_and_write": false, 00:05:09.951 "abort": true, 00:05:09.951 "seek_hole": false, 00:05:09.951 "seek_data": false, 00:05:09.951 "copy": true, 00:05:09.951 "nvme_iov_md": false 00:05:09.951 }, 00:05:09.951 "memory_domains": [ 00:05:09.951 { 00:05:09.951 "dma_device_id": "system", 00:05:09.951 "dma_device_type": 1 00:05:09.951 }, 00:05:09.951 { 00:05:09.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.951 "dma_device_type": 2 00:05:09.951 } 00:05:09.951 ], 00:05:09.951 "driver_specific": {} 00:05:09.951 }, 00:05:09.951 { 00:05:09.951 "name": "Passthru0", 00:05:09.951 "aliases": [ 00:05:09.951 "db1495a4-7940-51c7-ab4a-c4e7f750eb64" 00:05:09.951 ], 00:05:09.951 "product_name": "passthru", 00:05:09.951 "block_size": 512, 00:05:09.951 "num_blocks": 16384, 00:05:09.951 "uuid": "db1495a4-7940-51c7-ab4a-c4e7f750eb64", 00:05:09.951 "assigned_rate_limits": { 00:05:09.951 "rw_ios_per_sec": 0, 00:05:09.951 "rw_mbytes_per_sec": 0, 00:05:09.951 "r_mbytes_per_sec": 0, 00:05:09.951 "w_mbytes_per_sec": 0 
00:05:09.951 }, 00:05:09.951 "claimed": false, 00:05:09.951 "zoned": false, 00:05:09.951 "supported_io_types": { 00:05:09.951 "read": true, 00:05:09.951 "write": true, 00:05:09.951 "unmap": true, 00:05:09.951 "flush": true, 00:05:09.951 "reset": true, 00:05:09.951 "nvme_admin": false, 00:05:09.951 "nvme_io": false, 00:05:09.951 "nvme_io_md": false, 00:05:09.951 "write_zeroes": true, 00:05:09.951 "zcopy": true, 00:05:09.951 "get_zone_info": false, 00:05:09.951 "zone_management": false, 00:05:09.951 "zone_append": false, 00:05:09.951 "compare": false, 00:05:09.951 "compare_and_write": false, 00:05:09.951 "abort": true, 00:05:09.951 "seek_hole": false, 00:05:09.952 "seek_data": false, 00:05:09.952 "copy": true, 00:05:09.952 "nvme_iov_md": false 00:05:09.952 }, 00:05:09.952 "memory_domains": [ 00:05:09.952 { 00:05:09.952 "dma_device_id": "system", 00:05:09.952 "dma_device_type": 1 00:05:09.952 }, 00:05:09.952 { 00:05:09.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.952 "dma_device_type": 2 00:05:09.952 } 00:05:09.952 ], 00:05:09.952 "driver_specific": { 00:05:09.952 "passthru": { 00:05:09.952 "name": "Passthru0", 00:05:09.952 "base_bdev_name": "Malloc2" 00:05:09.952 } 00:05:09.952 } 00:05:09.952 } 00:05:09.952 ]' 00:05:09.952 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.212 07:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.212 07:37:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.212 00:05:10.212 real 0m0.355s 00:05:10.212 user 0m0.194s 00:05:10.212 sys 0m0.062s 00:05:10.212 07:37:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.212 07:37:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.212 ************************************ 00:05:10.212 END TEST rpc_daemon_integrity 00:05:10.212 ************************************ 00:05:10.212 07:37:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.212 07:37:00 rpc -- rpc/rpc.sh@84 -- # killprocess 56840 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 56840 ']' 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@958 -- # kill -0 56840 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@959 -- # uname 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56840 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.212 
killing process with pid 56840 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56840' 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@973 -- # kill 56840 00:05:10.212 07:37:00 rpc -- common/autotest_common.sh@978 -- # wait 56840 00:05:12.752 00:05:12.752 real 0m5.097s 00:05:12.752 user 0m5.615s 00:05:12.752 sys 0m0.914s 00:05:12.752 07:37:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.752 07:37:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.752 ************************************ 00:05:12.752 END TEST rpc 00:05:12.752 ************************************ 00:05:12.752 07:37:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:12.752 07:37:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.752 07:37:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.752 07:37:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.752 ************************************ 00:05:12.752 START TEST skip_rpc 00:05:12.752 ************************************ 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:12.752 * Looking for test storage... 
00:05:12.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.752 07:37:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.752 --rc genhtml_branch_coverage=1 00:05:12.752 --rc genhtml_function_coverage=1 00:05:12.752 --rc genhtml_legend=1 00:05:12.752 --rc geninfo_all_blocks=1 00:05:12.752 --rc geninfo_unexecuted_blocks=1 00:05:12.752 00:05:12.752 ' 00:05:12.752 07:37:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.752 --rc genhtml_branch_coverage=1 00:05:12.752 --rc genhtml_function_coverage=1 00:05:12.752 --rc genhtml_legend=1 00:05:12.752 --rc geninfo_all_blocks=1 00:05:12.752 --rc geninfo_unexecuted_blocks=1 00:05:12.752 00:05:12.752 ' 00:05:12.753 07:37:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:12.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.753 --rc genhtml_branch_coverage=1 00:05:12.753 --rc genhtml_function_coverage=1 00:05:12.753 --rc genhtml_legend=1 00:05:12.753 --rc geninfo_all_blocks=1 00:05:12.753 --rc geninfo_unexecuted_blocks=1 00:05:12.753 00:05:12.753 ' 00:05:12.753 07:37:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.753 --rc genhtml_branch_coverage=1 00:05:12.753 --rc genhtml_function_coverage=1 00:05:12.753 --rc genhtml_legend=1 00:05:12.753 --rc geninfo_all_blocks=1 00:05:12.753 --rc geninfo_unexecuted_blocks=1 00:05:12.753 00:05:12.753 ' 00:05:12.753 07:37:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.753 07:37:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.753 07:37:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:12.753 07:37:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.753 07:37:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.753 07:37:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.753 ************************************ 00:05:12.753 START TEST skip_rpc 00:05:12.753 ************************************ 00:05:12.753 07:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:12.753 07:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57069 00:05:12.753 07:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:12.753 07:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.753 07:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.013 [2024-11-29 07:37:02.796889] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:13.013 [2024-11-29 07:37:02.796993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57069 ] 00:05:13.273 [2024-11-29 07:37:02.969037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.273 [2024-11-29 07:37:03.078433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:18.551 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57069 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57069 ']' 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57069 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57069 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.552 killing process with pid 57069 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57069' 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57069 00:05:18.552 07:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57069 00:05:20.465 00:05:20.465 real 0m7.340s 00:05:20.465 user 0m6.892s 00:05:20.465 sys 0m0.368s 00:05:20.465 07:37:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.465 07:37:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.465 ************************************ 00:05:20.465 END TEST skip_rpc 00:05:20.465 ************************************ 00:05:20.465 07:37:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.465 07:37:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.465 07:37:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.465 07:37:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.465 
************************************ 00:05:20.465 START TEST skip_rpc_with_json 00:05:20.465 ************************************ 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57179 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57179 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57179 ']' 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.465 07:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.465 [2024-11-29 07:37:10.197470] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:20.465 [2024-11-29 07:37:10.197592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57179 ] 00:05:20.466 [2024-11-29 07:37:10.370916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.725 [2024-11-29 07:37:10.482381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.665 [2024-11-29 07:37:11.309651] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:21.665 request: 00:05:21.665 { 00:05:21.665 "trtype": "tcp", 00:05:21.665 "method": "nvmf_get_transports", 00:05:21.665 "req_id": 1 00:05:21.665 } 00:05:21.665 Got JSON-RPC error response 00:05:21.665 response: 00:05:21.665 { 00:05:21.665 "code": -19, 00:05:21.665 "message": "No such device" 00:05:21.665 } 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.665 [2024-11-29 07:37:11.321751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.665 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.665 { 00:05:21.665 "subsystems": [ 00:05:21.665 { 00:05:21.665 "subsystem": "fsdev", 00:05:21.665 "config": [ 00:05:21.665 { 00:05:21.665 "method": "fsdev_set_opts", 00:05:21.665 "params": { 00:05:21.665 "fsdev_io_pool_size": 65535, 00:05:21.665 "fsdev_io_cache_size": 256 00:05:21.665 } 00:05:21.665 } 00:05:21.665 ] 00:05:21.665 }, 00:05:21.665 { 00:05:21.665 "subsystem": "keyring", 00:05:21.665 "config": [] 00:05:21.665 }, 00:05:21.665 { 00:05:21.665 "subsystem": "iobuf", 00:05:21.665 "config": [ 00:05:21.665 { 00:05:21.665 "method": "iobuf_set_options", 00:05:21.665 "params": { 00:05:21.665 "small_pool_count": 8192, 00:05:21.665 "large_pool_count": 1024, 00:05:21.665 "small_bufsize": 8192, 00:05:21.665 "large_bufsize": 135168, 00:05:21.665 "enable_numa": false 00:05:21.665 } 00:05:21.665 } 00:05:21.665 ] 00:05:21.665 }, 00:05:21.665 { 00:05:21.665 "subsystem": "sock", 00:05:21.666 "config": [ 00:05:21.666 { 00:05:21.666 "method": "sock_set_default_impl", 00:05:21.666 "params": { 00:05:21.666 "impl_name": "posix" 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "sock_impl_set_options", 00:05:21.666 "params": { 00:05:21.666 "impl_name": "ssl", 00:05:21.666 "recv_buf_size": 4096, 00:05:21.666 "send_buf_size": 4096, 00:05:21.666 "enable_recv_pipe": true, 00:05:21.666 "enable_quickack": false, 00:05:21.666 
"enable_placement_id": 0, 00:05:21.666 "enable_zerocopy_send_server": true, 00:05:21.666 "enable_zerocopy_send_client": false, 00:05:21.666 "zerocopy_threshold": 0, 00:05:21.666 "tls_version": 0, 00:05:21.666 "enable_ktls": false 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "sock_impl_set_options", 00:05:21.666 "params": { 00:05:21.666 "impl_name": "posix", 00:05:21.666 "recv_buf_size": 2097152, 00:05:21.666 "send_buf_size": 2097152, 00:05:21.666 "enable_recv_pipe": true, 00:05:21.666 "enable_quickack": false, 00:05:21.666 "enable_placement_id": 0, 00:05:21.666 "enable_zerocopy_send_server": true, 00:05:21.666 "enable_zerocopy_send_client": false, 00:05:21.666 "zerocopy_threshold": 0, 00:05:21.666 "tls_version": 0, 00:05:21.666 "enable_ktls": false 00:05:21.666 } 00:05:21.666 } 00:05:21.666 ] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "vmd", 00:05:21.666 "config": [] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "accel", 00:05:21.666 "config": [ 00:05:21.666 { 00:05:21.666 "method": "accel_set_options", 00:05:21.666 "params": { 00:05:21.666 "small_cache_size": 128, 00:05:21.666 "large_cache_size": 16, 00:05:21.666 "task_count": 2048, 00:05:21.666 "sequence_count": 2048, 00:05:21.666 "buf_count": 2048 00:05:21.666 } 00:05:21.666 } 00:05:21.666 ] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "bdev", 00:05:21.666 "config": [ 00:05:21.666 { 00:05:21.666 "method": "bdev_set_options", 00:05:21.666 "params": { 00:05:21.666 "bdev_io_pool_size": 65535, 00:05:21.666 "bdev_io_cache_size": 256, 00:05:21.666 "bdev_auto_examine": true, 00:05:21.666 "iobuf_small_cache_size": 128, 00:05:21.666 "iobuf_large_cache_size": 16 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "bdev_raid_set_options", 00:05:21.666 "params": { 00:05:21.666 "process_window_size_kb": 1024, 00:05:21.666 "process_max_bandwidth_mb_sec": 0 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "bdev_iscsi_set_options", 
00:05:21.666 "params": { 00:05:21.666 "timeout_sec": 30 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "bdev_nvme_set_options", 00:05:21.666 "params": { 00:05:21.666 "action_on_timeout": "none", 00:05:21.666 "timeout_us": 0, 00:05:21.666 "timeout_admin_us": 0, 00:05:21.666 "keep_alive_timeout_ms": 10000, 00:05:21.666 "arbitration_burst": 0, 00:05:21.666 "low_priority_weight": 0, 00:05:21.666 "medium_priority_weight": 0, 00:05:21.666 "high_priority_weight": 0, 00:05:21.666 "nvme_adminq_poll_period_us": 10000, 00:05:21.666 "nvme_ioq_poll_period_us": 0, 00:05:21.666 "io_queue_requests": 0, 00:05:21.666 "delay_cmd_submit": true, 00:05:21.666 "transport_retry_count": 4, 00:05:21.666 "bdev_retry_count": 3, 00:05:21.666 "transport_ack_timeout": 0, 00:05:21.666 "ctrlr_loss_timeout_sec": 0, 00:05:21.666 "reconnect_delay_sec": 0, 00:05:21.666 "fast_io_fail_timeout_sec": 0, 00:05:21.666 "disable_auto_failback": false, 00:05:21.666 "generate_uuids": false, 00:05:21.666 "transport_tos": 0, 00:05:21.666 "nvme_error_stat": false, 00:05:21.666 "rdma_srq_size": 0, 00:05:21.666 "io_path_stat": false, 00:05:21.666 "allow_accel_sequence": false, 00:05:21.666 "rdma_max_cq_size": 0, 00:05:21.666 "rdma_cm_event_timeout_ms": 0, 00:05:21.666 "dhchap_digests": [ 00:05:21.666 "sha256", 00:05:21.666 "sha384", 00:05:21.666 "sha512" 00:05:21.666 ], 00:05:21.666 "dhchap_dhgroups": [ 00:05:21.666 "null", 00:05:21.666 "ffdhe2048", 00:05:21.666 "ffdhe3072", 00:05:21.666 "ffdhe4096", 00:05:21.666 "ffdhe6144", 00:05:21.666 "ffdhe8192" 00:05:21.666 ] 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "bdev_nvme_set_hotplug", 00:05:21.666 "params": { 00:05:21.666 "period_us": 100000, 00:05:21.666 "enable": false 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "bdev_wait_for_examine" 00:05:21.666 } 00:05:21.666 ] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "scsi", 00:05:21.666 "config": null 00:05:21.666 }, 00:05:21.666 { 
00:05:21.666 "subsystem": "scheduler", 00:05:21.666 "config": [ 00:05:21.666 { 00:05:21.666 "method": "framework_set_scheduler", 00:05:21.666 "params": { 00:05:21.666 "name": "static" 00:05:21.666 } 00:05:21.666 } 00:05:21.666 ] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "vhost_scsi", 00:05:21.666 "config": [] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "vhost_blk", 00:05:21.666 "config": [] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "ublk", 00:05:21.666 "config": [] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "nbd", 00:05:21.666 "config": [] 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "subsystem": "nvmf", 00:05:21.666 "config": [ 00:05:21.666 { 00:05:21.666 "method": "nvmf_set_config", 00:05:21.666 "params": { 00:05:21.666 "discovery_filter": "match_any", 00:05:21.666 "admin_cmd_passthru": { 00:05:21.666 "identify_ctrlr": false 00:05:21.666 }, 00:05:21.666 "dhchap_digests": [ 00:05:21.666 "sha256", 00:05:21.666 "sha384", 00:05:21.666 "sha512" 00:05:21.666 ], 00:05:21.666 "dhchap_dhgroups": [ 00:05:21.666 "null", 00:05:21.666 "ffdhe2048", 00:05:21.666 "ffdhe3072", 00:05:21.666 "ffdhe4096", 00:05:21.666 "ffdhe6144", 00:05:21.666 "ffdhe8192" 00:05:21.666 ] 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "nvmf_set_max_subsystems", 00:05:21.666 "params": { 00:05:21.666 "max_subsystems": 1024 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "nvmf_set_crdt", 00:05:21.666 "params": { 00:05:21.666 "crdt1": 0, 00:05:21.666 "crdt2": 0, 00:05:21.666 "crdt3": 0 00:05:21.666 } 00:05:21.666 }, 00:05:21.666 { 00:05:21.666 "method": "nvmf_create_transport", 00:05:21.666 "params": { 00:05:21.667 "trtype": "TCP", 00:05:21.667 "max_queue_depth": 128, 00:05:21.667 "max_io_qpairs_per_ctrlr": 127, 00:05:21.667 "in_capsule_data_size": 4096, 00:05:21.667 "max_io_size": 131072, 00:05:21.667 "io_unit_size": 131072, 00:05:21.667 "max_aq_depth": 128, 00:05:21.667 "num_shared_buffers": 511, 
00:05:21.667 "buf_cache_size": 4294967295, 00:05:21.667 "dif_insert_or_strip": false, 00:05:21.667 "zcopy": false, 00:05:21.667 "c2h_success": true, 00:05:21.667 "sock_priority": 0, 00:05:21.667 "abort_timeout_sec": 1, 00:05:21.667 "ack_timeout": 0, 00:05:21.667 "data_wr_pool_size": 0 00:05:21.667 } 00:05:21.667 } 00:05:21.667 ] 00:05:21.667 }, 00:05:21.667 { 00:05:21.667 "subsystem": "iscsi", 00:05:21.667 "config": [ 00:05:21.667 { 00:05:21.667 "method": "iscsi_set_options", 00:05:21.667 "params": { 00:05:21.667 "node_base": "iqn.2016-06.io.spdk", 00:05:21.667 "max_sessions": 128, 00:05:21.667 "max_connections_per_session": 2, 00:05:21.667 "max_queue_depth": 64, 00:05:21.667 "default_time2wait": 2, 00:05:21.667 "default_time2retain": 20, 00:05:21.667 "first_burst_length": 8192, 00:05:21.667 "immediate_data": true, 00:05:21.667 "allow_duplicated_isid": false, 00:05:21.667 "error_recovery_level": 0, 00:05:21.667 "nop_timeout": 60, 00:05:21.667 "nop_in_interval": 30, 00:05:21.667 "disable_chap": false, 00:05:21.667 "require_chap": false, 00:05:21.667 "mutual_chap": false, 00:05:21.667 "chap_group": 0, 00:05:21.667 "max_large_datain_per_connection": 64, 00:05:21.667 "max_r2t_per_connection": 4, 00:05:21.667 "pdu_pool_size": 36864, 00:05:21.667 "immediate_data_pool_size": 16384, 00:05:21.667 "data_out_pool_size": 2048 00:05:21.667 } 00:05:21.667 } 00:05:21.667 ] 00:05:21.667 } 00:05:21.667 ] 00:05:21.667 } 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57179 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57179 ']' 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57179 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57179 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.667 killing process with pid 57179 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57179' 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57179 00:05:21.667 07:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57179 00:05:24.207 07:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57224 00:05:24.207 07:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.207 07:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57224 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57224 ']' 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57224 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57224 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:29.519 killing process with pid 57224 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57224' 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57224 00:05:29.519 07:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57224 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.430 00:05:31.430 real 0m11.073s 00:05:31.430 user 0m10.541s 00:05:31.430 sys 0m0.820s 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.430 ************************************ 00:05:31.430 END TEST skip_rpc_with_json 00:05:31.430 ************************************ 00:05:31.430 07:37:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.430 07:37:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.430 07:37:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.430 07:37:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.430 ************************************ 00:05:31.430 START TEST skip_rpc_with_delay 00:05:31.430 ************************************ 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:31.430 07:37:21 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.430 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.430 [2024-11-29 07:37:21.343667] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.690 00:05:31.690 real 0m0.165s 00:05:31.690 user 0m0.079s 00:05:31.690 sys 0m0.084s 00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.690 07:37:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.690 ************************************ 00:05:31.690 END TEST skip_rpc_with_delay 00:05:31.690 ************************************ 00:05:31.690 07:37:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.690 07:37:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.690 07:37:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.690 07:37:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.690 07:37:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.690 07:37:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.690 ************************************ 00:05:31.690 START TEST exit_on_failed_rpc_init 00:05:31.690 ************************************ 00:05:31.690 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:31.690 07:37:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57358 00:05:31.690 07:37:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57358 00:05:31.691 07:37:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57358 ']' 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.691 07:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.691 [2024-11-29 07:37:21.579243] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:31.691 [2024-11-29 07:37:21.579393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57358 ] 00:05:31.950 [2024-11-29 07:37:21.753236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.950 [2024-11-29 07:37:21.855151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.932 07:37:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.932 07:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.932 [2024-11-29 07:37:22.792779] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:32.932 [2024-11-29 07:37:22.792914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57381 ] 00:05:33.192 [2024-11-29 07:37:22.955221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.192 [2024-11-29 07:37:23.066367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.192 [2024-11-29 07:37:23.066476] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:33.192 [2024-11-29 07:37:23.066490] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.192 [2024-11-29 07:37:23.066500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57358 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57358 ']' 00:05:33.450 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57358 00:05:33.450 07:37:23 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57358 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.451 killing process with pid 57358 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57358' 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57358 00:05:33.451 07:37:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57358 00:05:36.000 00:05:36.001 real 0m4.152s 00:05:36.001 user 0m4.480s 00:05:36.001 sys 0m0.535s 00:05:36.001 07:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.001 07:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.001 ************************************ 00:05:36.001 END TEST exit_on_failed_rpc_init 00:05:36.001 ************************************ 00:05:36.001 07:37:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:36.001 00:05:36.001 real 0m23.224s 00:05:36.001 user 0m22.203s 00:05:36.001 sys 0m2.104s 00:05:36.001 07:37:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.001 07:37:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.001 ************************************ 00:05:36.001 END TEST skip_rpc 00:05:36.001 ************************************ 00:05:36.001 07:37:25 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:36.001 07:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.001 07:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.001 07:37:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.001 ************************************ 00:05:36.001 START TEST rpc_client 00:05:36.001 ************************************ 00:05:36.001 07:37:25 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:36.001 * Looking for test storage... 00:05:36.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:36.001 07:37:25 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.001 07:37:25 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.001 07:37:25 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.001 07:37:25 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.001 07:37:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.001 07:37:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.001 07:37:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.001 07:37:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.001 07:37:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.261 07:37:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:36.261 07:37:25 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.261 07:37:25 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.261 --rc genhtml_branch_coverage=1 00:05:36.261 --rc genhtml_function_coverage=1 00:05:36.261 --rc genhtml_legend=1 00:05:36.261 --rc geninfo_all_blocks=1 00:05:36.261 --rc geninfo_unexecuted_blocks=1 00:05:36.261 00:05:36.261 ' 00:05:36.261 07:37:25 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.261 --rc genhtml_branch_coverage=1 00:05:36.261 --rc genhtml_function_coverage=1 00:05:36.261 --rc 
genhtml_legend=1 00:05:36.261 --rc geninfo_all_blocks=1 00:05:36.261 --rc geninfo_unexecuted_blocks=1 00:05:36.261 00:05:36.261 ' 00:05:36.261 07:37:25 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.261 --rc genhtml_branch_coverage=1 00:05:36.261 --rc genhtml_function_coverage=1 00:05:36.261 --rc genhtml_legend=1 00:05:36.261 --rc geninfo_all_blocks=1 00:05:36.261 --rc geninfo_unexecuted_blocks=1 00:05:36.261 00:05:36.261 ' 00:05:36.261 07:37:25 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.261 --rc genhtml_branch_coverage=1 00:05:36.261 --rc genhtml_function_coverage=1 00:05:36.261 --rc genhtml_legend=1 00:05:36.261 --rc geninfo_all_blocks=1 00:05:36.261 --rc geninfo_unexecuted_blocks=1 00:05:36.261 00:05:36.261 ' 00:05:36.261 07:37:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:36.261 OK 00:05:36.261 07:37:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.261 ************************************ 00:05:36.261 END TEST rpc_client 00:05:36.261 ************************************ 00:05:36.261 00:05:36.261 real 0m0.282s 00:05:36.261 user 0m0.153s 00:05:36.261 sys 0m0.146s 00:05:36.261 07:37:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.261 07:37:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:36.261 07:37:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:36.261 07:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.261 07:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.261 07:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:36.261 ************************************ 00:05:36.261 START TEST json_config 
00:05:36.261 ************************************ 00:05:36.261 07:37:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:36.261 07:37:26 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.261 07:37:26 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.261 07:37:26 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.522 07:37:26 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.522 07:37:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.522 07:37:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.522 07:37:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.522 07:37:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.522 07:37:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.522 07:37:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:36.522 07:37:26 json_config -- scripts/common.sh@345 -- # : 1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.522 07:37:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.522 07:37:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@353 -- # local d=1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.522 07:37:26 json_config -- scripts/common.sh@355 -- # echo 1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.522 07:37:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@353 -- # local d=2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.522 07:37:26 json_config -- scripts/common.sh@355 -- # echo 2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.522 07:37:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.522 07:37:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.522 07:37:26 json_config -- scripts/common.sh@368 -- # return 0 00:05:36.522 07:37:26 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.522 07:37:26 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.522 --rc genhtml_branch_coverage=1 00:05:36.522 --rc genhtml_function_coverage=1 00:05:36.522 --rc genhtml_legend=1 00:05:36.522 --rc geninfo_all_blocks=1 00:05:36.522 --rc geninfo_unexecuted_blocks=1 00:05:36.522 00:05:36.522 ' 00:05:36.522 07:37:26 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.522 --rc genhtml_branch_coverage=1 00:05:36.522 --rc genhtml_function_coverage=1 00:05:36.522 --rc genhtml_legend=1 00:05:36.522 --rc geninfo_all_blocks=1 00:05:36.522 --rc geninfo_unexecuted_blocks=1 00:05:36.522 00:05:36.522 ' 00:05:36.522 07:37:26 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.522 --rc genhtml_branch_coverage=1 00:05:36.522 --rc genhtml_function_coverage=1 00:05:36.522 --rc genhtml_legend=1 00:05:36.522 --rc geninfo_all_blocks=1 00:05:36.522 --rc geninfo_unexecuted_blocks=1 00:05:36.522 00:05:36.522 ' 00:05:36.522 07:37:26 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.522 --rc genhtml_branch_coverage=1 00:05:36.522 --rc genhtml_function_coverage=1 00:05:36.522 --rc genhtml_legend=1 00:05:36.522 --rc geninfo_all_blocks=1 00:05:36.522 --rc geninfo_unexecuted_blocks=1 00:05:36.522 00:05:36.522 ' 00:05:36.522 07:37:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.522 07:37:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8a55aa8-6913-4d26-998f-a1da9bb68def 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c8a55aa8-6913-4d26-998f-a1da9bb68def 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.523 07:37:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.523 07:37:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.523 07:37:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.523 07:37:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.523 07:37:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.523 07:37:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.523 07:37:26 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.523 07:37:26 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.523 07:37:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@51 -- # : 0 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.523 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.523 07:37:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.523 WARNING: No tests are enabled so not running JSON configuration tests 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:36.523 07:37:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:36.523 00:05:36.523 real 0m0.219s 00:05:36.523 user 0m0.129s 00:05:36.523 sys 0m0.099s 00:05:36.523 07:37:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.523 07:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.523 ************************************ 00:05:36.523 END TEST json_config 00:05:36.523 ************************************ 00:05:36.523 07:37:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.523 07:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.523 07:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.523 07:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:36.523 ************************************ 00:05:36.523 START TEST json_config_extra_key 00:05:36.523 ************************************ 00:05:36.523 07:37:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.523 07:37:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.523 07:37:26 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:36.523 07:37:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.783 07:37:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.783 --rc genhtml_branch_coverage=1 00:05:36.783 --rc genhtml_function_coverage=1 00:05:36.783 --rc genhtml_legend=1 00:05:36.783 --rc geninfo_all_blocks=1 00:05:36.783 --rc geninfo_unexecuted_blocks=1 00:05:36.783 00:05:36.783 ' 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.783 --rc genhtml_branch_coverage=1 00:05:36.783 --rc genhtml_function_coverage=1 00:05:36.783 --rc 
genhtml_legend=1 00:05:36.783 --rc geninfo_all_blocks=1 00:05:36.783 --rc geninfo_unexecuted_blocks=1 00:05:36.783 00:05:36.783 ' 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.783 --rc genhtml_branch_coverage=1 00:05:36.783 --rc genhtml_function_coverage=1 00:05:36.783 --rc genhtml_legend=1 00:05:36.783 --rc geninfo_all_blocks=1 00:05:36.783 --rc geninfo_unexecuted_blocks=1 00:05:36.783 00:05:36.783 ' 00:05:36.783 07:37:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.783 --rc genhtml_branch_coverage=1 00:05:36.783 --rc genhtml_function_coverage=1 00:05:36.783 --rc genhtml_legend=1 00:05:36.783 --rc geninfo_all_blocks=1 00:05:36.783 --rc geninfo_unexecuted_blocks=1 00:05:36.783 00:05:36.783 ' 00:05:36.783 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.783 07:37:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.783 07:37:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.783 07:37:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.783 07:37:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8a55aa8-6913-4d26-998f-a1da9bb68def 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c8a55aa8-6913-4d26-998f-a1da9bb68def 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.784 07:37:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.784 07:37:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.784 07:37:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.784 07:37:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.784 07:37:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.784 07:37:26 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.784 07:37:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.784 07:37:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.784 07:37:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.784 07:37:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.784 INFO: launching applications... 00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:36.784 07:37:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57586 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.784 Waiting for target to run... 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.784 07:37:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57586 /var/tmp/spdk_tgt.sock 00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57586 ']' 00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.784 07:37:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.784 [2024-11-29 07:37:26.652040] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:36.784 [2024-11-29 07:37:26.652180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57586 ] 00:05:37.354 [2024-11-29 07:37:27.041595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.354 [2024-11-29 07:37:27.141429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.924 07:37:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.924 07:37:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:37.924 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.924 INFO: shutting down applications... 00:05:37.924 07:37:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:37.924 07:37:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57586 ]] 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57586 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:37.924 07:37:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.493 07:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.493 07:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.493 07:37:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:38.493 07:37:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.062 07:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.062 07:37:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.062 07:37:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:39.062 07:37:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.631 07:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.631 07:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.631 07:37:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:39.631 07:37:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.199 07:37:29 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:40.199 07:37:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.199 07:37:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:40.199 07:37:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.459 07:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.459 07:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.459 07:37:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:40.459 07:37:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57586 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.030 SPDK target shutdown done 00:05:41.030 07:37:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.030 Success 00:05:41.030 07:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.030 00:05:41.030 real 0m4.506s 00:05:41.030 user 0m3.865s 00:05:41.030 sys 0m0.537s 00:05:41.030 07:37:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.030 07:37:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.030 ************************************ 00:05:41.030 END TEST json_config_extra_key 00:05:41.030 ************************************ 00:05:41.030 07:37:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.030 07:37:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.030 07:37:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.030 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:05:41.030 ************************************ 00:05:41.030 START TEST alias_rpc 00:05:41.030 ************************************ 00:05:41.030 07:37:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.290 * Looking for test storage... 00:05:41.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.290 07:37:31 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.290 07:37:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.290 --rc genhtml_branch_coverage=1 00:05:41.290 --rc genhtml_function_coverage=1 00:05:41.290 --rc genhtml_legend=1 00:05:41.290 --rc geninfo_all_blocks=1 00:05:41.290 --rc geninfo_unexecuted_blocks=1 00:05:41.290 00:05:41.290 ' 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.290 --rc genhtml_branch_coverage=1 00:05:41.290 --rc genhtml_function_coverage=1 00:05:41.290 --rc 
genhtml_legend=1 00:05:41.290 --rc geninfo_all_blocks=1 00:05:41.290 --rc geninfo_unexecuted_blocks=1 00:05:41.290 00:05:41.290 ' 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.290 --rc genhtml_branch_coverage=1 00:05:41.290 --rc genhtml_function_coverage=1 00:05:41.290 --rc genhtml_legend=1 00:05:41.290 --rc geninfo_all_blocks=1 00:05:41.290 --rc geninfo_unexecuted_blocks=1 00:05:41.290 00:05:41.290 ' 00:05:41.290 07:37:31 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.291 --rc genhtml_branch_coverage=1 00:05:41.291 --rc genhtml_function_coverage=1 00:05:41.291 --rc genhtml_legend=1 00:05:41.291 --rc geninfo_all_blocks=1 00:05:41.291 --rc geninfo_unexecuted_blocks=1 00:05:41.291 00:05:41.291 ' 00:05:41.291 07:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.291 07:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57697 00:05:41.291 07:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57697 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57697 ']' 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.291 07:37:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.291 07:37:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 [2024-11-29 07:37:31.269614] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:41.551 [2024-11-29 07:37:31.269738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:05:41.551 [2024-11-29 07:37:31.442560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.810 [2024-11-29 07:37:31.547499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.749 07:37:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:42.749 07:37:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57697 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57697 ']' 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57697 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57697 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.749 killing process with pid 57697 00:05:42.749 07:37:32 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57697' 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 57697 00:05:42.749 07:37:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 57697 00:05:45.287 00:05:45.287 real 0m3.993s 00:05:45.287 user 0m3.990s 00:05:45.287 sys 0m0.537s 00:05:45.287 07:37:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.287 07:37:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.287 ************************************ 00:05:45.287 END TEST alias_rpc 00:05:45.287 ************************************ 00:05:45.287 07:37:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:45.287 07:37:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:45.287 07:37:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.287 07:37:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.287 07:37:35 -- common/autotest_common.sh@10 -- # set +x 00:05:45.287 ************************************ 00:05:45.287 START TEST spdkcli_tcp 00:05:45.287 ************************************ 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:45.287 * Looking for test storage... 
00:05:45.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.287 07:37:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.287 --rc genhtml_branch_coverage=1 00:05:45.287 --rc genhtml_function_coverage=1 00:05:45.287 --rc genhtml_legend=1 00:05:45.287 --rc geninfo_all_blocks=1 00:05:45.287 --rc geninfo_unexecuted_blocks=1 00:05:45.287 00:05:45.287 ' 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.287 --rc genhtml_branch_coverage=1 00:05:45.287 --rc genhtml_function_coverage=1 00:05:45.287 --rc genhtml_legend=1 00:05:45.287 --rc geninfo_all_blocks=1 00:05:45.287 --rc geninfo_unexecuted_blocks=1 00:05:45.287 00:05:45.287 ' 00:05:45.287 07:37:35 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.287 --rc genhtml_branch_coverage=1 00:05:45.287 --rc genhtml_function_coverage=1 00:05:45.287 --rc genhtml_legend=1 00:05:45.287 --rc geninfo_all_blocks=1 00:05:45.287 --rc geninfo_unexecuted_blocks=1 00:05:45.287 00:05:45.287 ' 00:05:45.287 07:37:35 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.287 --rc genhtml_branch_coverage=1 00:05:45.287 --rc genhtml_function_coverage=1 00:05:45.287 --rc genhtml_legend=1 00:05:45.287 --rc geninfo_all_blocks=1 00:05:45.287 --rc geninfo_unexecuted_blocks=1 00:05:45.287 00:05:45.287 ' 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.287 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.547 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.547 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57803 00:05:45.547 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.547 07:37:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57803 00:05:45.547 07:37:35 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57803 ']' 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.547 07:37:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.547 [2024-11-29 07:37:35.334428] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:45.547 [2024-11-29 07:37:35.334558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57803 ] 00:05:45.806 [2024-11-29 07:37:35.509505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.806 [2024-11-29 07:37:35.615522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.806 [2024-11-29 07:37:35.615558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.744 07:37:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.744 07:37:36 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:46.744 07:37:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57821 00:05:46.744 07:37:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.744 07:37:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.744 [ 00:05:46.744 "bdev_malloc_delete", 
00:05:46.744 "bdev_malloc_create", 00:05:46.744 "bdev_null_resize", 00:05:46.744 "bdev_null_delete", 00:05:46.744 "bdev_null_create", 00:05:46.744 "bdev_nvme_cuse_unregister", 00:05:46.744 "bdev_nvme_cuse_register", 00:05:46.744 "bdev_opal_new_user", 00:05:46.744 "bdev_opal_set_lock_state", 00:05:46.744 "bdev_opal_delete", 00:05:46.744 "bdev_opal_get_info", 00:05:46.744 "bdev_opal_create", 00:05:46.744 "bdev_nvme_opal_revert", 00:05:46.744 "bdev_nvme_opal_init", 00:05:46.744 "bdev_nvme_send_cmd", 00:05:46.744 "bdev_nvme_set_keys", 00:05:46.744 "bdev_nvme_get_path_iostat", 00:05:46.744 "bdev_nvme_get_mdns_discovery_info", 00:05:46.744 "bdev_nvme_stop_mdns_discovery", 00:05:46.744 "bdev_nvme_start_mdns_discovery", 00:05:46.744 "bdev_nvme_set_multipath_policy", 00:05:46.744 "bdev_nvme_set_preferred_path", 00:05:46.744 "bdev_nvme_get_io_paths", 00:05:46.744 "bdev_nvme_remove_error_injection", 00:05:46.744 "bdev_nvme_add_error_injection", 00:05:46.744 "bdev_nvme_get_discovery_info", 00:05:46.744 "bdev_nvme_stop_discovery", 00:05:46.745 "bdev_nvme_start_discovery", 00:05:46.745 "bdev_nvme_get_controller_health_info", 00:05:46.745 "bdev_nvme_disable_controller", 00:05:46.745 "bdev_nvme_enable_controller", 00:05:46.745 "bdev_nvme_reset_controller", 00:05:46.745 "bdev_nvme_get_transport_statistics", 00:05:46.745 "bdev_nvme_apply_firmware", 00:05:46.745 "bdev_nvme_detach_controller", 00:05:46.745 "bdev_nvme_get_controllers", 00:05:46.745 "bdev_nvme_attach_controller", 00:05:46.745 "bdev_nvme_set_hotplug", 00:05:46.745 "bdev_nvme_set_options", 00:05:46.745 "bdev_passthru_delete", 00:05:46.745 "bdev_passthru_create", 00:05:46.745 "bdev_lvol_set_parent_bdev", 00:05:46.745 "bdev_lvol_set_parent", 00:05:46.745 "bdev_lvol_check_shallow_copy", 00:05:46.745 "bdev_lvol_start_shallow_copy", 00:05:46.745 "bdev_lvol_grow_lvstore", 00:05:46.745 "bdev_lvol_get_lvols", 00:05:46.745 "bdev_lvol_get_lvstores", 00:05:46.745 "bdev_lvol_delete", 00:05:46.745 "bdev_lvol_set_read_only", 
00:05:46.745 "bdev_lvol_resize", 00:05:46.745 "bdev_lvol_decouple_parent", 00:05:46.745 "bdev_lvol_inflate", 00:05:46.745 "bdev_lvol_rename", 00:05:46.745 "bdev_lvol_clone_bdev", 00:05:46.745 "bdev_lvol_clone", 00:05:46.745 "bdev_lvol_snapshot", 00:05:46.745 "bdev_lvol_create", 00:05:46.745 "bdev_lvol_delete_lvstore", 00:05:46.745 "bdev_lvol_rename_lvstore", 00:05:46.745 "bdev_lvol_create_lvstore", 00:05:46.745 "bdev_raid_set_options", 00:05:46.745 "bdev_raid_remove_base_bdev", 00:05:46.745 "bdev_raid_add_base_bdev", 00:05:46.745 "bdev_raid_delete", 00:05:46.745 "bdev_raid_create", 00:05:46.745 "bdev_raid_get_bdevs", 00:05:46.745 "bdev_error_inject_error", 00:05:46.745 "bdev_error_delete", 00:05:46.745 "bdev_error_create", 00:05:46.745 "bdev_split_delete", 00:05:46.745 "bdev_split_create", 00:05:46.745 "bdev_delay_delete", 00:05:46.745 "bdev_delay_create", 00:05:46.745 "bdev_delay_update_latency", 00:05:46.745 "bdev_zone_block_delete", 00:05:46.745 "bdev_zone_block_create", 00:05:46.745 "blobfs_create", 00:05:46.745 "blobfs_detect", 00:05:46.745 "blobfs_set_cache_size", 00:05:46.745 "bdev_aio_delete", 00:05:46.745 "bdev_aio_rescan", 00:05:46.745 "bdev_aio_create", 00:05:46.745 "bdev_ftl_set_property", 00:05:46.745 "bdev_ftl_get_properties", 00:05:46.745 "bdev_ftl_get_stats", 00:05:46.745 "bdev_ftl_unmap", 00:05:46.745 "bdev_ftl_unload", 00:05:46.745 "bdev_ftl_delete", 00:05:46.745 "bdev_ftl_load", 00:05:46.745 "bdev_ftl_create", 00:05:46.745 "bdev_virtio_attach_controller", 00:05:46.745 "bdev_virtio_scsi_get_devices", 00:05:46.745 "bdev_virtio_detach_controller", 00:05:46.745 "bdev_virtio_blk_set_hotplug", 00:05:46.745 "bdev_iscsi_delete", 00:05:46.745 "bdev_iscsi_create", 00:05:46.745 "bdev_iscsi_set_options", 00:05:46.745 "accel_error_inject_error", 00:05:46.745 "ioat_scan_accel_module", 00:05:46.745 "dsa_scan_accel_module", 00:05:46.745 "iaa_scan_accel_module", 00:05:46.745 "keyring_file_remove_key", 00:05:46.745 "keyring_file_add_key", 00:05:46.745 
"keyring_linux_set_options", 00:05:46.745 "fsdev_aio_delete", 00:05:46.745 "fsdev_aio_create", 00:05:46.745 "iscsi_get_histogram", 00:05:46.745 "iscsi_enable_histogram", 00:05:46.745 "iscsi_set_options", 00:05:46.745 "iscsi_get_auth_groups", 00:05:46.745 "iscsi_auth_group_remove_secret", 00:05:46.745 "iscsi_auth_group_add_secret", 00:05:46.745 "iscsi_delete_auth_group", 00:05:46.745 "iscsi_create_auth_group", 00:05:46.745 "iscsi_set_discovery_auth", 00:05:46.745 "iscsi_get_options", 00:05:46.745 "iscsi_target_node_request_logout", 00:05:46.745 "iscsi_target_node_set_redirect", 00:05:46.745 "iscsi_target_node_set_auth", 00:05:46.745 "iscsi_target_node_add_lun", 00:05:46.745 "iscsi_get_stats", 00:05:46.745 "iscsi_get_connections", 00:05:46.745 "iscsi_portal_group_set_auth", 00:05:46.745 "iscsi_start_portal_group", 00:05:46.745 "iscsi_delete_portal_group", 00:05:46.745 "iscsi_create_portal_group", 00:05:46.745 "iscsi_get_portal_groups", 00:05:46.745 "iscsi_delete_target_node", 00:05:46.745 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.745 "iscsi_target_node_add_pg_ig_maps", 00:05:46.745 "iscsi_create_target_node", 00:05:46.745 "iscsi_get_target_nodes", 00:05:46.745 "iscsi_delete_initiator_group", 00:05:46.745 "iscsi_initiator_group_remove_initiators", 00:05:46.745 "iscsi_initiator_group_add_initiators", 00:05:46.745 "iscsi_create_initiator_group", 00:05:46.745 "iscsi_get_initiator_groups", 00:05:46.745 "nvmf_set_crdt", 00:05:46.745 "nvmf_set_config", 00:05:46.745 "nvmf_set_max_subsystems", 00:05:46.745 "nvmf_stop_mdns_prr", 00:05:46.745 "nvmf_publish_mdns_prr", 00:05:46.745 "nvmf_subsystem_get_listeners", 00:05:46.745 "nvmf_subsystem_get_qpairs", 00:05:46.745 "nvmf_subsystem_get_controllers", 00:05:46.745 "nvmf_get_stats", 00:05:46.745 "nvmf_get_transports", 00:05:46.745 "nvmf_create_transport", 00:05:46.745 "nvmf_get_targets", 00:05:46.745 "nvmf_delete_target", 00:05:46.745 "nvmf_create_target", 00:05:46.745 "nvmf_subsystem_allow_any_host", 00:05:46.745 
"nvmf_subsystem_set_keys", 00:05:46.745 "nvmf_subsystem_remove_host", 00:05:46.745 "nvmf_subsystem_add_host", 00:05:46.745 "nvmf_ns_remove_host", 00:05:46.745 "nvmf_ns_add_host", 00:05:46.745 "nvmf_subsystem_remove_ns", 00:05:46.745 "nvmf_subsystem_set_ns_ana_group", 00:05:46.745 "nvmf_subsystem_add_ns", 00:05:46.745 "nvmf_subsystem_listener_set_ana_state", 00:05:46.745 "nvmf_discovery_get_referrals", 00:05:46.745 "nvmf_discovery_remove_referral", 00:05:46.745 "nvmf_discovery_add_referral", 00:05:46.745 "nvmf_subsystem_remove_listener", 00:05:46.745 "nvmf_subsystem_add_listener", 00:05:46.745 "nvmf_delete_subsystem", 00:05:46.745 "nvmf_create_subsystem", 00:05:46.745 "nvmf_get_subsystems", 00:05:46.745 "env_dpdk_get_mem_stats", 00:05:46.745 "nbd_get_disks", 00:05:46.745 "nbd_stop_disk", 00:05:46.745 "nbd_start_disk", 00:05:46.745 "ublk_recover_disk", 00:05:46.745 "ublk_get_disks", 00:05:46.745 "ublk_stop_disk", 00:05:46.745 "ublk_start_disk", 00:05:46.745 "ublk_destroy_target", 00:05:46.745 "ublk_create_target", 00:05:46.745 "virtio_blk_create_transport", 00:05:46.745 "virtio_blk_get_transports", 00:05:46.745 "vhost_controller_set_coalescing", 00:05:46.745 "vhost_get_controllers", 00:05:46.745 "vhost_delete_controller", 00:05:46.745 "vhost_create_blk_controller", 00:05:46.745 "vhost_scsi_controller_remove_target", 00:05:46.745 "vhost_scsi_controller_add_target", 00:05:46.745 "vhost_start_scsi_controller", 00:05:46.745 "vhost_create_scsi_controller", 00:05:46.745 "thread_set_cpumask", 00:05:46.745 "scheduler_set_options", 00:05:46.745 "framework_get_governor", 00:05:46.745 "framework_get_scheduler", 00:05:46.745 "framework_set_scheduler", 00:05:46.745 "framework_get_reactors", 00:05:46.745 "thread_get_io_channels", 00:05:46.745 "thread_get_pollers", 00:05:46.745 "thread_get_stats", 00:05:46.745 "framework_monitor_context_switch", 00:05:46.745 "spdk_kill_instance", 00:05:46.745 "log_enable_timestamps", 00:05:46.745 "log_get_flags", 00:05:46.745 "log_clear_flag", 
00:05:46.745 "log_set_flag", 00:05:46.745 "log_get_level", 00:05:46.745 "log_set_level", 00:05:46.745 "log_get_print_level", 00:05:46.745 "log_set_print_level", 00:05:46.745 "framework_enable_cpumask_locks", 00:05:46.745 "framework_disable_cpumask_locks", 00:05:46.745 "framework_wait_init", 00:05:46.745 "framework_start_init", 00:05:46.745 "scsi_get_devices", 00:05:46.745 "bdev_get_histogram", 00:05:46.745 "bdev_enable_histogram", 00:05:46.745 "bdev_set_qos_limit", 00:05:46.745 "bdev_set_qd_sampling_period", 00:05:46.745 "bdev_get_bdevs", 00:05:46.745 "bdev_reset_iostat", 00:05:46.745 "bdev_get_iostat", 00:05:46.745 "bdev_examine", 00:05:46.745 "bdev_wait_for_examine", 00:05:46.745 "bdev_set_options", 00:05:46.745 "accel_get_stats", 00:05:46.745 "accel_set_options", 00:05:46.745 "accel_set_driver", 00:05:46.745 "accel_crypto_key_destroy", 00:05:46.745 "accel_crypto_keys_get", 00:05:46.745 "accel_crypto_key_create", 00:05:46.745 "accel_assign_opc", 00:05:46.745 "accel_get_module_info", 00:05:46.745 "accel_get_opc_assignments", 00:05:46.745 "vmd_rescan", 00:05:46.745 "vmd_remove_device", 00:05:46.745 "vmd_enable", 00:05:46.745 "sock_get_default_impl", 00:05:46.745 "sock_set_default_impl", 00:05:46.745 "sock_impl_set_options", 00:05:46.745 "sock_impl_get_options", 00:05:46.745 "iobuf_get_stats", 00:05:46.745 "iobuf_set_options", 00:05:46.745 "keyring_get_keys", 00:05:46.745 "framework_get_pci_devices", 00:05:46.745 "framework_get_config", 00:05:46.745 "framework_get_subsystems", 00:05:46.745 "fsdev_set_opts", 00:05:46.745 "fsdev_get_opts", 00:05:46.745 "trace_get_info", 00:05:46.745 "trace_get_tpoint_group_mask", 00:05:46.745 "trace_disable_tpoint_group", 00:05:46.745 "trace_enable_tpoint_group", 00:05:46.745 "trace_clear_tpoint_mask", 00:05:46.745 "trace_set_tpoint_mask", 00:05:46.745 "notify_get_notifications", 00:05:46.745 "notify_get_types", 00:05:46.745 "spdk_get_version", 00:05:46.745 "rpc_get_methods" 00:05:46.745 ] 00:05:46.745 07:37:36 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.745 07:37:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.745 07:37:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.005 07:37:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.005 07:37:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57803 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57803 ']' 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57803 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57803 00:05:47.005 killing process with pid 57803 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57803' 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57803 00:05:47.005 07:37:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57803 00:05:49.544 ************************************ 00:05:49.544 END TEST spdkcli_tcp 00:05:49.544 ************************************ 00:05:49.544 00:05:49.544 real 0m4.035s 00:05:49.544 user 0m7.212s 00:05:49.544 sys 0m0.601s 00:05:49.544 07:37:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.544 07:37:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.544 07:37:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.544 07:37:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.544 07:37:39 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.544 07:37:39 -- common/autotest_common.sh@10 -- # set +x 00:05:49.544 ************************************ 00:05:49.544 START TEST dpdk_mem_utility 00:05:49.544 ************************************ 00:05:49.544 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.544 * Looking for test storage... 00:05:49.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:49.544 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.544 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.544 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.544 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:49.545 
07:37:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.545 07:37:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.545 --rc genhtml_branch_coverage=1 00:05:49.545 --rc genhtml_function_coverage=1 00:05:49.545 --rc genhtml_legend=1 00:05:49.545 --rc geninfo_all_blocks=1 00:05:49.545 --rc geninfo_unexecuted_blocks=1 00:05:49.545 00:05:49.545 ' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.545 --rc 
genhtml_branch_coverage=1 00:05:49.545 --rc genhtml_function_coverage=1 00:05:49.545 --rc genhtml_legend=1 00:05:49.545 --rc geninfo_all_blocks=1 00:05:49.545 --rc geninfo_unexecuted_blocks=1 00:05:49.545 00:05:49.545 ' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.545 --rc genhtml_branch_coverage=1 00:05:49.545 --rc genhtml_function_coverage=1 00:05:49.545 --rc genhtml_legend=1 00:05:49.545 --rc geninfo_all_blocks=1 00:05:49.545 --rc geninfo_unexecuted_blocks=1 00:05:49.545 00:05:49.545 ' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.545 --rc genhtml_branch_coverage=1 00:05:49.545 --rc genhtml_function_coverage=1 00:05:49.545 --rc genhtml_legend=1 00:05:49.545 --rc geninfo_all_blocks=1 00:05:49.545 --rc geninfo_unexecuted_blocks=1 00:05:49.545 00:05:49.545 ' 00:05:49.545 07:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.545 07:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57921 00:05:49.545 07:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.545 07:37:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57921 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57921 ']' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.545 07:37:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.545 [2024-11-29 07:37:39.423212] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:49.545 [2024-11-29 07:37:39.423323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57921 ] 00:05:49.805 [2024-11-29 07:37:39.597331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.806 [2024-11-29 07:37:39.702733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.748 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.748 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:50.748 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.748 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.748 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.748 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.748 { 00:05:50.748 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.748 } 00:05:50.748 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.748 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.748 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:50.748 1 heaps 
totaling size 824.000000 MiB 00:05:50.748 size: 824.000000 MiB heap id: 0 00:05:50.748 end heaps---------- 00:05:50.748 9 mempools totaling size 603.782043 MiB 00:05:50.748 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.748 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.748 size: 100.555481 MiB name: bdev_io_57921 00:05:50.748 size: 50.003479 MiB name: msgpool_57921 00:05:50.748 size: 36.509338 MiB name: fsdev_io_57921 00:05:50.748 size: 21.763794 MiB name: PDU_Pool 00:05:50.748 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.748 size: 4.133484 MiB name: evtpool_57921 00:05:50.748 size: 0.026123 MiB name: Session_Pool 00:05:50.748 end mempools------- 00:05:50.748 6 memzones totaling size 4.142822 MiB 00:05:50.748 size: 1.000366 MiB name: RG_ring_0_57921 00:05:50.748 size: 1.000366 MiB name: RG_ring_1_57921 00:05:50.748 size: 1.000366 MiB name: RG_ring_4_57921 00:05:50.748 size: 1.000366 MiB name: RG_ring_5_57921 00:05:50.748 size: 0.125366 MiB name: RG_ring_2_57921 00:05:50.748 size: 0.015991 MiB name: RG_ring_3_57921 00:05:50.748 end memzones------- 00:05:50.748 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.748 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:05:50.748 list of free elements. 
size: 16.779907 MiB 00:05:50.748 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:50.748 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:50.748 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:50.748 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:50.748 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:50.748 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:50.748 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:50.748 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:50.748 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:50.748 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:50.748 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:50.748 element at address: 0x20001b400000 with size: 0.561462 MiB 00:05:50.748 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:50.748 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:50.748 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:50.748 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:50.748 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:50.748 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:50.748 list of standard malloc elements. 
size: 199.289185 MiB 00:05:50.748 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:50.748 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:50.748 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:50.748 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:50.748 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:50.748 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:50.748 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:50.748 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:50.748 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:50.748 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:50.748 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:50.748 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:50.748 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:50.748 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:50.748 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:50.748 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:50.749 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:50.749 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b490bc0 with size: 0.000244 
MiB 00:05:50.749 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4927c0 
with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:50.749 element at 
address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:50.749 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:50.750 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:50.750 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b680 with size: 0.000244 MiB 
00:05:50.750 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d280 with 
size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:50.750 element at address: 
0x20002886ee80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:50.750 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:50.750 list of memzone associated elements. 
size: 607.930908 MiB 00:05:50.750 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:50.750 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.750 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:50.750 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.750 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:50.750 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57921_0 00:05:50.750 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:50.750 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57921_0 00:05:50.750 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:50.750 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57921_0 00:05:50.750 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:50.750 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.750 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:50.750 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.750 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:50.750 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57921_0 00:05:50.750 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:50.750 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57921 00:05:50.750 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:50.750 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57921 00:05:50.750 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:50.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.750 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:50.750 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.750 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:50.750 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.750 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:50.750 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.750 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:50.750 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57921 00:05:50.750 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:50.750 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57921 00:05:50.750 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:50.750 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57921 00:05:50.750 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:50.750 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57921 00:05:50.750 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:50.750 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57921 00:05:50.750 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:50.750 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57921 00:05:50.750 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:50.750 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.750 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:50.750 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.750 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:50.750 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.750 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:50.750 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57921 00:05:50.750 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:50.750 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57921 00:05:50.750 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:50.750 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.750 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:50.750 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.750 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:50.750 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57921 00:05:50.750 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:50.750 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.750 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:50.751 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57921 00:05:50.751 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:50.751 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57921 00:05:50.751 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:50.751 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57921 00:05:50.751 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:50.751 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.751 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.751 07:37:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57921 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57921 ']' 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57921 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57921 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.751 07:37:40 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.751 killing process with pid 57921 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57921' 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57921 00:05:50.751 07:37:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57921 00:05:53.290 00:05:53.290 real 0m3.847s 00:05:53.290 user 0m3.764s 00:05:53.290 sys 0m0.535s 00:05:53.290 07:37:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.290 07:37:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.290 ************************************ 00:05:53.290 END TEST dpdk_mem_utility 00:05:53.290 ************************************ 00:05:53.290 07:37:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.290 07:37:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.290 07:37:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.290 07:37:43 -- common/autotest_common.sh@10 -- # set +x 00:05:53.290 ************************************ 00:05:53.290 START TEST event 00:05:53.290 ************************************ 00:05:53.290 07:37:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.290 * Looking for test storage... 
00:05:53.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.290 07:37:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.290 07:37:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.290 07:37:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.290 07:37:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.290 07:37:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.290 07:37:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.290 07:37:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.290 07:37:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.290 07:37:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.290 07:37:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.290 07:37:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.290 07:37:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.290 07:37:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.290 07:37:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.290 07:37:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.290 07:37:43 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.290 07:37:43 event -- scripts/common.sh@345 -- # : 1 00:05:53.290 07:37:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.290 07:37:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.290 07:37:43 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.290 07:37:43 event -- scripts/common.sh@353 -- # local d=1 00:05:53.290 07:37:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.290 07:37:43 event -- scripts/common.sh@355 -- # echo 1 00:05:53.290 07:37:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.550 07:37:43 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.550 07:37:43 event -- scripts/common.sh@353 -- # local d=2 00:05:53.550 07:37:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.550 07:37:43 event -- scripts/common.sh@355 -- # echo 2 00:05:53.550 07:37:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.550 07:37:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.550 07:37:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.550 07:37:43 event -- scripts/common.sh@368 -- # return 0 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.550 --rc genhtml_branch_coverage=1 00:05:53.550 --rc genhtml_function_coverage=1 00:05:53.550 --rc genhtml_legend=1 00:05:53.550 --rc geninfo_all_blocks=1 00:05:53.550 --rc geninfo_unexecuted_blocks=1 00:05:53.550 00:05:53.550 ' 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.550 --rc genhtml_branch_coverage=1 00:05:53.550 --rc genhtml_function_coverage=1 00:05:53.550 --rc genhtml_legend=1 00:05:53.550 --rc geninfo_all_blocks=1 00:05:53.550 --rc geninfo_unexecuted_blocks=1 00:05:53.550 00:05:53.550 ' 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.550 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:53.550 --rc genhtml_branch_coverage=1 00:05:53.550 --rc genhtml_function_coverage=1 00:05:53.550 --rc genhtml_legend=1 00:05:53.550 --rc geninfo_all_blocks=1 00:05:53.550 --rc geninfo_unexecuted_blocks=1 00:05:53.550 00:05:53.550 ' 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.550 --rc genhtml_branch_coverage=1 00:05:53.550 --rc genhtml_function_coverage=1 00:05:53.550 --rc genhtml_legend=1 00:05:53.550 --rc geninfo_all_blocks=1 00:05:53.550 --rc geninfo_unexecuted_blocks=1 00:05:53.550 00:05:53.550 ' 00:05:53.550 07:37:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.550 07:37:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.550 07:37:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:53.550 07:37:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.550 07:37:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.550 ************************************ 00:05:53.550 START TEST event_perf 00:05:53.550 ************************************ 00:05:53.550 07:37:43 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.550 Running I/O for 1 seconds...[2024-11-29 07:37:43.304314] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:53.550 [2024-11-29 07:37:43.304410] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58029 ] 00:05:53.550 [2024-11-29 07:37:43.474212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.809 Running I/O for 1 seconds...[2024-11-29 07:37:43.591472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.809 [2024-11-29 07:37:43.591643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.809 [2024-11-29 07:37:43.591806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.809 [2024-11-29 07:37:43.591841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.190 00:05:55.190 lcore 0: 214702 00:05:55.190 lcore 1: 214701 00:05:55.190 lcore 2: 214702 00:05:55.190 lcore 3: 214700 00:05:55.190 done. 
00:05:55.190 00:05:55.190 real 0m1.573s 00:05:55.190 user 0m4.348s 00:05:55.190 sys 0m0.106s 00:05:55.190 07:37:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.190 07:37:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.190 ************************************ 00:05:55.190 END TEST event_perf 00:05:55.190 ************************************ 00:05:55.190 07:37:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.190 07:37:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:55.190 07:37:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.190 07:37:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.190 ************************************ 00:05:55.190 START TEST event_reactor 00:05:55.190 ************************************ 00:05:55.190 07:37:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.190 [2024-11-29 07:37:44.943078] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:55.190 [2024-11-29 07:37:44.943191] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58068 ] 00:05:55.190 [2024-11-29 07:37:45.103790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.449 [2024-11-29 07:37:45.212165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.830 test_start 00:05:56.830 oneshot 00:05:56.830 tick 100 00:05:56.830 tick 100 00:05:56.830 tick 250 00:05:56.830 tick 100 00:05:56.830 tick 100 00:05:56.830 tick 100 00:05:56.830 tick 250 00:05:56.830 tick 500 00:05:56.830 tick 100 00:05:56.830 tick 100 00:05:56.830 tick 250 00:05:56.830 tick 100 00:05:56.830 tick 100 00:05:56.830 test_end 00:05:56.830 00:05:56.830 real 0m1.532s 00:05:56.830 user 0m1.348s 00:05:56.830 sys 0m0.077s 00:05:56.830 07:37:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.830 07:37:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.830 ************************************ 00:05:56.830 END TEST event_reactor 00:05:56.830 ************************************ 00:05:56.830 07:37:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.830 07:37:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.830 07:37:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.830 07:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.830 ************************************ 00:05:56.830 START TEST event_reactor_perf 00:05:56.830 ************************************ 00:05:56.830 07:37:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.830 [2024-11-29 
07:37:46.548057] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:56.830 [2024-11-29 07:37:46.548171] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:05:56.830 [2024-11-29 07:37:46.717073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.091 [2024-11-29 07:37:46.825896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.474 test_start 00:05:58.474 test_end 00:05:58.474 Performance: 395720 events per second 00:05:58.474 00:05:58.474 real 0m1.548s 00:05:58.474 user 0m1.351s 00:05:58.474 sys 0m0.089s 00:05:58.474 07:37:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.474 07:37:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.474 ************************************ 00:05:58.474 END TEST event_reactor_perf 00:05:58.474 ************************************ 00:05:58.474 07:37:48 event -- event/event.sh@49 -- # uname -s 00:05:58.474 07:37:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.474 07:37:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.474 07:37:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.474 07:37:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.474 07:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.474 ************************************ 00:05:58.474 START TEST event_scheduler 00:05:58.474 ************************************ 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.474 * Looking for test storage... 
00:05:58.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.474 07:37:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.474 --rc genhtml_branch_coverage=1 00:05:58.474 --rc genhtml_function_coverage=1 00:05:58.474 --rc genhtml_legend=1 00:05:58.474 --rc geninfo_all_blocks=1 00:05:58.474 --rc geninfo_unexecuted_blocks=1 00:05:58.474 00:05:58.474 ' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.474 --rc genhtml_branch_coverage=1 00:05:58.474 --rc genhtml_function_coverage=1 00:05:58.474 --rc 
genhtml_legend=1 00:05:58.474 --rc geninfo_all_blocks=1 00:05:58.474 --rc geninfo_unexecuted_blocks=1 00:05:58.474 00:05:58.474 ' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.474 --rc genhtml_branch_coverage=1 00:05:58.474 --rc genhtml_function_coverage=1 00:05:58.474 --rc genhtml_legend=1 00:05:58.474 --rc geninfo_all_blocks=1 00:05:58.474 --rc geninfo_unexecuted_blocks=1 00:05:58.474 00:05:58.474 ' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.474 --rc genhtml_branch_coverage=1 00:05:58.474 --rc genhtml_function_coverage=1 00:05:58.474 --rc genhtml_legend=1 00:05:58.474 --rc geninfo_all_blocks=1 00:05:58.474 --rc geninfo_unexecuted_blocks=1 00:05:58.474 00:05:58.474 ' 00:05:58.474 07:37:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.474 07:37:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58181 00:05:58.474 07:37:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.474 07:37:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.474 07:37:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58181 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58181 ']' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.474 07:37:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.734 [2024-11-29 07:37:48.422834] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:58.734 [2024-11-29 07:37:48.422962] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58181 ] 00:05:58.734 [2024-11-29 07:37:48.597065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.994 [2024-11-29 07:37:48.709259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.994 [2024-11-29 07:37:48.709421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.994 [2024-11-29 07:37:48.709932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.994 [2024-11-29 07:37:48.709971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:59.564 07:37:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.564 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.564 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.564 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.564 POWER: Cannot set governor of lcore 0 to performance 00:05:59.564 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.564 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.564 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.564 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.564 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:59.564 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:59.564 POWER: Unable to set Power Management Environment for lcore 0 00:05:59.564 [2024-11-29 07:37:49.270657] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:59.564 [2024-11-29 07:37:49.270706] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:59.564 [2024-11-29 07:37:49.270742] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.564 [2024-11-29 07:37:49.270782] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.564 [2024-11-29 07:37:49.270814] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.564 [2024-11-29 07:37:49.270846] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.564 07:37:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.564 07:37:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:05:59.824 [2024-11-29 07:37:49.586613] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:59.824 07:37:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.824 07:37:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.824 07:37:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.824 07:37:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.824 07:37:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 ************************************ 00:05:59.824 START TEST scheduler_create_thread 00:05:59.824 ************************************ 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 2 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 3 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 4 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 5 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.824 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 6 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 7 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 8 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 9 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 10 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.825 07:37:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.825 07:37:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.205 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.205 07:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:01.205 07:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:01.205 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.205 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.143 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.143 07:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.143 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.143 07:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 07:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.082 07:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.082 07:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.082 07:37:52 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.082 07:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.650 ************************************ 00:06:03.650 END TEST scheduler_create_thread 00:06:03.650 ************************************ 00:06:03.650 07:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.650 00:06:03.650 real 0m3.884s 00:06:03.650 user 0m0.028s 00:06:03.650 sys 0m0.009s 00:06:03.650 07:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.650 07:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.650 07:37:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:03.650 07:37:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58181 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58181 ']' 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58181 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58181 00:06:03.650 killing process with pid 58181 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58181' 00:06:03.650 07:37:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58181 00:06:03.650 07:37:53 
event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58181 00:06:04.219 [2024-11-29 07:37:53.863596] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:05.158 00:06:05.158 real 0m6.876s 00:06:05.158 user 0m14.275s 00:06:05.158 sys 0m0.520s 00:06:05.158 07:37:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.158 07:37:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.158 ************************************ 00:06:05.158 END TEST event_scheduler 00:06:05.158 ************************************ 00:06:05.158 07:37:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:05.158 07:37:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:05.158 07:37:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.158 07:37:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.158 07:37:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.158 ************************************ 00:06:05.158 START TEST app_repeat 00:06:05.158 ************************************ 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58298 00:06:05.158 07:37:55 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.158 Process app_repeat pid: 58298 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58298' 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.158 spdk_app_start Round 0 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:05.158 07:37:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58298 /var/tmp/spdk-nbd.sock 00:06:05.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.158 07:37:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.418 [2024-11-29 07:37:55.133441] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:05.418 [2024-11-29 07:37:55.133557] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:06:05.418 [2024-11-29 07:37:55.301370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.677 [2024-11-29 07:37:55.414474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.677 [2024-11-29 07:37:55.414508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.247 07:37:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.247 07:37:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.247 07:37:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.507 Malloc0 00:06:06.507 07:37:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.773 Malloc1 00:06:06.773 07:37:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.773 07:37:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.773 07:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.773 07:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.774 07:37:56 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.774 07:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.043 /dev/nbd0 00:06:07.043 07:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.043 07:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.043 1+0 records in 00:06:07.043 1+0 
records out 00:06:07.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359984 s, 11.4 MB/s 00:06:07.043 07:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.044 07:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.044 07:37:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.044 07:37:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.044 07:37:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.044 07:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.044 07:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.044 07:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.044 /dev/nbd1 00:06:07.303 07:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.303 1+0 records in 00:06:07.303 1+0 records out 00:06:07.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407491 s, 10.1 MB/s 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.303 07:37:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.303 07:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.303 { 00:06:07.303 "nbd_device": "/dev/nbd0", 00:06:07.303 "bdev_name": "Malloc0" 00:06:07.303 }, 00:06:07.303 { 00:06:07.303 "nbd_device": "/dev/nbd1", 00:06:07.303 "bdev_name": "Malloc1" 00:06:07.304 } 00:06:07.304 ]' 00:06:07.304 07:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.304 { 00:06:07.304 "nbd_device": "/dev/nbd0", 00:06:07.304 "bdev_name": "Malloc0" 00:06:07.304 }, 00:06:07.304 { 00:06:07.304 "nbd_device": "/dev/nbd1", 00:06:07.304 "bdev_name": "Malloc1" 00:06:07.304 } 00:06:07.304 ]' 00:06:07.304 07:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.564 /dev/nbd1' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.564 /dev/nbd1' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.564 256+0 records in 00:06:07.564 256+0 records out 00:06:07.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124517 s, 84.2 MB/s 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.564 256+0 records in 00:06:07.564 256+0 records out 00:06:07.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178243 s, 58.8 MB/s 00:06:07.564 07:37:57 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.564 256+0 records in 00:06:07.564 256+0 records out 00:06:07.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256874 s, 40.8 MB/s 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.564 07:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.824 07:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.084 07:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.345 07:37:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.345 07:37:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.605 07:37:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.983 [2024-11-29 07:37:59.549387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.983 [2024-11-29 07:37:59.651900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.983 [2024-11-29 07:37:59.651903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.983 
[2024-11-29 07:37:59.836024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.983 [2024-11-29 07:37:59.836125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.890 spdk_app_start Round 1 00:06:11.890 07:38:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.890 07:38:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.890 07:38:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58298 /var/tmp/spdk-nbd.sock 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.890 07:38:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.890 07:38:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.149 Malloc0 00:06:12.149 07:38:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.408 Malloc1 00:06:12.408 07:38:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.408 07:38:02 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.408 /dev/nbd0 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.408 07:38:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.408 07:38:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.408 07:38:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.409 07:38:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.668 1+0 records in 00:06:12.668 1+0 records out 00:06:12.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396475 s, 10.3 MB/s 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.668 
07:38:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.668 /dev/nbd1 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.668 1+0 records in 00:06:12.668 1+0 records out 00:06:12.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209581 s, 19.5 MB/s 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.668 07:38:02 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.668 07:38:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.668 07:38:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.928 { 00:06:12.928 "nbd_device": "/dev/nbd0", 00:06:12.928 "bdev_name": "Malloc0" 00:06:12.928 }, 00:06:12.928 { 00:06:12.928 "nbd_device": "/dev/nbd1", 00:06:12.928 "bdev_name": "Malloc1" 00:06:12.928 } 00:06:12.928 ]' 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.928 { 00:06:12.928 "nbd_device": "/dev/nbd0", 00:06:12.928 "bdev_name": "Malloc0" 00:06:12.928 }, 00:06:12.928 { 00:06:12.928 "nbd_device": "/dev/nbd1", 00:06:12.928 "bdev_name": "Malloc1" 00:06:12.928 } 00:06:12.928 ]' 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.928 /dev/nbd1' 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.928 /dev/nbd1' 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.928 07:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.188 
07:38:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.188 256+0 records in 00:06:13.188 256+0 records out 00:06:13.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138828 s, 75.5 MB/s 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.188 256+0 records in 00:06:13.188 256+0 records out 00:06:13.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203775 s, 51.5 MB/s 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.188 256+0 records in 00:06:13.188 256+0 records out 00:06:13.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024684 s, 42.5 MB/s 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.188 07:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.448 07:38:03 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.448 07:38:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.708 07:38:03 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.708 07:38:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.708 07:38:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.278 07:38:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.216 [2024-11-29 07:38:05.136371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.475 [2024-11-29 07:38:05.239574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.475 [2024-11-29 07:38:05.239600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.735 [2024-11-29 07:38:05.422303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.735 [2024-11-29 07:38:05.422388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:17.122 07:38:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.122 spdk_app_start Round 2 00:06:17.122 07:38:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:17.122 07:38:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58298 /var/tmp/spdk-nbd.sock 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.122 07:38:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.403 07:38:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.403 07:38:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.403 07:38:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.663 Malloc0 00:06:17.663 07:38:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.922 Malloc1 00:06:17.922 07:38:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.922 
07:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.922 07:38:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.923 07:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.923 07:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.923 07:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.182 /dev/nbd0 00:06:18.182 07:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.182 07:38:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.182 07:38:08 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.182 1+0 records in 00:06:18.182 1+0 records out 00:06:18.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433459 s, 9.4 MB/s 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.182 07:38:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.182 07:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.182 07:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.182 07:38:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.441 /dev/nbd1 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.441 07:38:08 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.441 1+0 records in 00:06:18.441 1+0 records out 00:06:18.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358255 s, 11.4 MB/s 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.441 07:38:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.441 07:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.701 { 00:06:18.701 "nbd_device": "/dev/nbd0", 00:06:18.701 "bdev_name": "Malloc0" 00:06:18.701 }, 00:06:18.701 { 00:06:18.701 "nbd_device": "/dev/nbd1", 00:06:18.701 "bdev_name": 
"Malloc1" 00:06:18.701 } 00:06:18.701 ]' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.701 { 00:06:18.701 "nbd_device": "/dev/nbd0", 00:06:18.701 "bdev_name": "Malloc0" 00:06:18.701 }, 00:06:18.701 { 00:06:18.701 "nbd_device": "/dev/nbd1", 00:06:18.701 "bdev_name": "Malloc1" 00:06:18.701 } 00:06:18.701 ]' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.701 /dev/nbd1' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.701 /dev/nbd1' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.701 256+0 records in 00:06:18.701 256+0 records out 00:06:18.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506642 s, 207 MB/s 
00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.701 256+0 records in 00:06:18.701 256+0 records out 00:06:18.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196983 s, 53.2 MB/s 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.701 256+0 records in 00:06:18.701 256+0 records out 00:06:18.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252657 s, 41.5 MB/s 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.701 07:38:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.960 07:38:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.219 07:38:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.479 07:38:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.479 07:38:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.738 07:38:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.118 [2024-11-29 07:38:10.771298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.118 [2024-11-29 07:38:10.875920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.118 [2024-11-29 07:38:10.875924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.118 [2024-11-29 07:38:11.061552] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.118 [2024-11-29 07:38:11.061622] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.027 07:38:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58298 /var/tmp/spdk-nbd.sock 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.027 07:38:12 event.app_repeat -- event/event.sh@39 -- # killprocess 58298 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58298 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58298 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.027 killing process with pid 58298 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58298' 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58298 00:06:23.027 07:38:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58298 00:06:23.967 spdk_app_start is called in Round 0. 00:06:23.967 Shutdown signal received, stop current app iteration 00:06:23.967 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:23.967 spdk_app_start is called in Round 1. 00:06:23.967 Shutdown signal received, stop current app iteration 00:06:23.967 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:23.967 spdk_app_start is called in Round 2. 
00:06:23.967 Shutdown signal received, stop current app iteration 00:06:23.967 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:23.967 spdk_app_start is called in Round 3. 00:06:23.968 Shutdown signal received, stop current app iteration 00:06:24.227 07:38:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.227 07:38:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:24.227 00:06:24.227 real 0m18.858s 00:06:24.227 user 0m40.326s 00:06:24.227 sys 0m2.666s 00:06:24.227 07:38:13 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.227 07:38:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 END TEST app_repeat 00:06:24.227 ************************************ 00:06:24.227 07:38:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.227 07:38:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.227 07:38:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.227 07:38:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.227 07:38:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.227 ************************************ 00:06:24.227 START TEST cpu_locks 00:06:24.227 ************************************ 00:06:24.227 07:38:13 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.227 * Looking for test storage... 
00:06:24.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.227 07:38:14 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.227 07:38:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.227 07:38:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.488 07:38:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.488 --rc genhtml_branch_coverage=1 00:06:24.488 --rc genhtml_function_coverage=1 00:06:24.488 --rc genhtml_legend=1 00:06:24.488 --rc geninfo_all_blocks=1 00:06:24.488 --rc geninfo_unexecuted_blocks=1 00:06:24.488 00:06:24.488 ' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.488 --rc genhtml_branch_coverage=1 00:06:24.488 --rc genhtml_function_coverage=1 00:06:24.488 --rc genhtml_legend=1 00:06:24.488 --rc geninfo_all_blocks=1 00:06:24.488 --rc geninfo_unexecuted_blocks=1 
00:06:24.488 00:06:24.488 ' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.488 --rc genhtml_branch_coverage=1 00:06:24.488 --rc genhtml_function_coverage=1 00:06:24.488 --rc genhtml_legend=1 00:06:24.488 --rc geninfo_all_blocks=1 00:06:24.488 --rc geninfo_unexecuted_blocks=1 00:06:24.488 00:06:24.488 ' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.488 --rc genhtml_branch_coverage=1 00:06:24.488 --rc genhtml_function_coverage=1 00:06:24.488 --rc genhtml_legend=1 00:06:24.488 --rc geninfo_all_blocks=1 00:06:24.488 --rc geninfo_unexecuted_blocks=1 00:06:24.488 00:06:24.488 ' 00:06:24.488 07:38:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.488 07:38:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.488 07:38:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.488 07:38:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.488 07:38:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.488 ************************************ 00:06:24.488 START TEST default_locks 00:06:24.488 ************************************ 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58740 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.488 
07:38:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58740 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58740 ']' 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.488 07:38:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.488 [2024-11-29 07:38:14.322651] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:24.488 [2024-11-29 07:38:14.322773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58740 ] 00:06:24.749 [2024-11-29 07:38:14.495714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.749 [2024-11-29 07:38:14.600565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.687 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.687 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:25.687 07:38:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58740 00:06:25.687 07:38:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58740 00:06:25.687 07:38:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58740 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58740 ']' 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58740 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58740 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.947 killing process with pid 58740 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58740' 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58740 00:06:25.947 07:38:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58740 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58740 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58740 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58740 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58740 ']' 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58740) - No such process 00:06:28.487 ERROR: process (pid: 58740) is no longer running 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.487 00:06:28.487 real 0m3.887s 00:06:28.487 user 0m3.839s 00:06:28.487 sys 0m0.614s 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.487 07:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.487 ************************************ 00:06:28.487 END TEST default_locks 00:06:28.487 ************************************ 00:06:28.488 07:38:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.488 07:38:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:28.488 07:38:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.488 07:38:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.488 ************************************ 00:06:28.488 START TEST default_locks_via_rpc 00:06:28.488 ************************************ 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58815 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58815 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58815 ']' 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.488 07:38:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.488 [2024-11-29 07:38:18.273650] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:28.488 [2024-11-29 07:38:18.273769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58815 ] 00:06:28.747 [2024-11-29 07:38:18.438426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.747 [2024-11-29 07:38:18.545978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.720 07:38:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58815 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58815 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58815 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58815 ']' 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58815 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.720 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58815 00:06:29.980 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.980 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.980 killing process with pid 58815 00:06:29.980 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58815' 00:06:29.980 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58815 00:06:29.980 07:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58815 00:06:32.519 00:06:32.519 real 0m3.768s 00:06:32.519 user 0m3.696s 00:06:32.519 sys 0m0.568s 00:06:32.519 07:38:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.519 07:38:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 ************************************ 00:06:32.519 END TEST default_locks_via_rpc 00:06:32.519 ************************************ 00:06:32.519 07:38:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.519 07:38:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.519 07:38:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.519 07:38:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 ************************************ 00:06:32.519 START TEST non_locking_app_on_locked_coremask 00:06:32.519 ************************************ 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58883 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58883 /var/tmp/spdk.sock 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58883 ']' 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.519 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.520 07:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.520 [2024-11-29 07:38:22.108124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:32.520 [2024-11-29 07:38:22.108233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58883 ] 00:06:32.520 [2024-11-29 07:38:22.283962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.520 [2024-11-29 07:38:22.388331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58905 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58905 /var/tmp/spdk2.sock 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58905 ']' 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.460 07:38:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.460 07:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.460 [2024-11-29 07:38:23.297311] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:33.460 [2024-11-29 07:38:23.297518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:06:33.720 [2024-11-29 07:38:23.464030] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.720 [2024-11-29 07:38:23.464076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.979 [2024-11-29 07:38:23.679578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.894 07:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.894 07:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.894 07:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58883 00:06:35.894 07:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58883 00:06:35.894 07:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58883 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58883 ']' 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58883 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58883 00:06:36.465 killing process with pid 58883 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58883' 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58883 00:06:36.465 07:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58883 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58905 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58905 ']' 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58905 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58905 00:06:41.746 killing process with pid 58905 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58905' 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58905 00:06:41.746 07:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58905 00:06:43.656 ************************************ 00:06:43.656 END TEST non_locking_app_on_locked_coremask 00:06:43.656 ************************************ 00:06:43.656 00:06:43.656 real 0m11.141s 
00:06:43.656 user 0m11.389s 00:06:43.656 sys 0m1.202s 00:06:43.656 07:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.656 07:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.656 07:38:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.656 07:38:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.656 07:38:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.656 07:38:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.656 ************************************ 00:06:43.656 START TEST locking_app_on_unlocked_coremask 00:06:43.656 ************************************ 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59044 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59044 /var/tmp/spdk.sock 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59044 ']' 00:06:43.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.656 07:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.656 [2024-11-29 07:38:33.318905] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:43.656 [2024-11-29 07:38:33.319033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59044 ] 00:06:43.656 [2024-11-29 07:38:33.489010] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.656 [2024-11-29 07:38:33.489136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.656 [2024-11-29 07:38:33.595050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59066 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59066 /var/tmp/spdk2.sock 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59066 ']' 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.596 07:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.596 [2024-11-29 07:38:34.484225] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:44.596 [2024-11-29 07:38:34.484417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:06:44.856 [2024-11-29 07:38:34.651744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.116 [2024-11-29 07:38:34.865681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59066 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59066 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59044 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59044 ']' 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59044 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59044 00:06:47.708 killing process with pid 59044 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59044' 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59044 00:06:47.708 07:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59044 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59066 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59066 ']' 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59066 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59066 00:06:53.006 killing process with pid 59066 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59066' 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59066 00:06:53.006 07:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59066 00:06:54.386 00:06:54.386 real 0m11.103s 00:06:54.386 user 0m11.320s 00:06:54.386 sys 0m1.166s 00:06:54.386 07:38:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.386 07:38:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.386 ************************************ 00:06:54.386 END TEST locking_app_on_unlocked_coremask 00:06:54.386 ************************************ 00:06:54.645 07:38:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:54.645 07:38:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.645 07:38:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.645 07:38:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.645 ************************************ 00:06:54.645 START TEST locking_app_on_locked_coremask 00:06:54.645 ************************************ 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59209 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59209 /var/tmp/spdk.sock 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59209 ']' 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.645 07:38:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.645 [2024-11-29 07:38:44.486748] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:54.645 [2024-11-29 07:38:44.486943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:06:54.904 [2024-11-29 07:38:44.645735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.904 [2024-11-29 07:38:44.756483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59225 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59225 /var/tmp/spdk2.sock 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59225 /var/tmp/spdk2.sock 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59225 /var/tmp/spdk2.sock 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.840 07:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.840 [2024-11-29 07:38:45.675940] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:55.840 [2024-11-29 07:38:45.676157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:06:56.098 [2024-11-29 07:38:45.842393] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59209 has claimed it. 00:06:56.098 [2024-11-29 07:38:45.842464] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.357 ERROR: process (pid: 59225) is no longer running 00:06:56.357 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59225) - No such process 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59209 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59209 00:06:56.357 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59209 00:06:56.926 07:38:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59209 ']' 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59209 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59209 00:06:56.926 killing process with pid 59209 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59209' 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59209 00:06:56.926 07:38:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59209 00:06:59.492 ************************************ 00:06:59.492 END TEST locking_app_on_locked_coremask 00:06:59.492 ************************************ 00:06:59.492 00:06:59.492 real 0m4.693s 00:06:59.492 user 0m4.868s 00:06:59.492 sys 0m0.775s 00:06:59.492 07:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.492 07:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 07:38:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:59.492 07:38:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:59.492 07:38:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.492 07:38:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 START TEST locking_overlapped_coremask 00:06:59.492 ************************************ 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59299 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59299 /var/tmp/spdk.sock 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.492 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.493 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.493 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.493 07:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.493 [2024-11-29 07:38:49.245527] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:59.493 [2024-11-29 07:38:49.245653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59299 ] 00:06:59.493 [2024-11-29 07:38:49.419328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.751 [2024-11-29 07:38:49.533837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.752 [2024-11-29 07:38:49.533973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.752 [2024-11-29 07:38:49.534012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59321 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59321 /var/tmp/spdk2.sock 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59321 /var/tmp/spdk2.sock 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59321 /var/tmp/spdk2.sock 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59321 ']' 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.688 07:38:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.688 [2024-11-29 07:38:50.496961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:00.688 [2024-11-29 07:38:50.497183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59321 ] 00:07:00.947 [2024-11-29 07:38:50.664857] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59299 has claimed it. 00:07:00.947 [2024-11-29 07:38:50.664935] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:01.205 ERROR: process (pid: 59321) is no longer running 00:07:01.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59321) - No such process 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59299 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59299 ']' 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59299 00:07:01.205 07:38:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.205 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59299 00:07:01.464 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.464 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.464 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59299' 00:07:01.464 killing process with pid 59299 00:07:01.464 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59299 00:07:01.464 07:38:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59299 00:07:04.000 00:07:04.000 real 0m4.401s 00:07:04.000 user 0m11.968s 00:07:04.000 sys 0m0.590s 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.000 ************************************ 00:07:04.000 END TEST locking_overlapped_coremask 00:07:04.000 ************************************ 00:07:04.000 07:38:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.000 07:38:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.000 07:38:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.000 07:38:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.000 ************************************ 00:07:04.000 START TEST 
locking_overlapped_coremask_via_rpc 00:07:04.000 ************************************ 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59386 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59386 /var/tmp/spdk.sock 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.000 07:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.000 [2024-11-29 07:38:53.707572] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:04.000 [2024-11-29 07:38:53.707692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:07:04.000 [2024-11-29 07:38:53.879942] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:04.000 [2024-11-29 07:38:53.880077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.260 [2024-11-29 07:38:53.996333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.260 [2024-11-29 07:38:53.996478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.260 [2024-11-29 07:38:53.996514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59404 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59404 /var/tmp/spdk2.sock 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59404 ']' 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.200 07:38:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.200 07:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.200 [2024-11-29 07:38:54.967477] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:05.200 [2024-11-29 07:38:54.967688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59404 ] 00:07:05.200 [2024-11-29 07:38:55.134874] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.200 [2024-11-29 07:38:55.134948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.460 [2024-11-29 07:38:55.371009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.460 [2024-11-29 07:38:55.374285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.460 [2024-11-29 07:38:55.374322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.993 07:38:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.993 [2024-11-29 07:38:57.545287] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59386 has claimed it. 00:07:07.993 request: 00:07:07.993 { 00:07:07.993 "method": "framework_enable_cpumask_locks", 00:07:07.993 "req_id": 1 00:07:07.993 } 00:07:07.993 Got JSON-RPC error response 00:07:07.993 response: 00:07:07.993 { 00:07:07.993 "code": -32603, 00:07:07.993 "message": "Failed to claim CPU core: 2" 00:07:07.993 } 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59386 /var/tmp/spdk.sock 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59386 ']' 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59404 /var/tmp/spdk2.sock 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59404 ']' 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.993 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.252 00:07:08.252 real 0m4.374s 00:07:08.252 user 0m1.284s 00:07:08.252 sys 0m0.187s 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.252 07:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.252 ************************************ 00:07:08.252 END TEST locking_overlapped_coremask_via_rpc 00:07:08.252 ************************************ 00:07:08.252 07:38:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.252 07:38:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59386 ]] 00:07:08.252 07:38:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59386 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59386 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.252 killing process with pid 59386 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59386 00:07:08.252 07:38:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59386 00:07:10.784 07:39:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59404 ]] 00:07:10.784 07:39:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59404 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59404 ']' 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59404 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59404 00:07:10.784 killing process with pid 59404 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59404' 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59404 00:07:10.784 07:39:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59404 00:07:13.321 07:39:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.321 07:39:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.321 Process with pid 59386 is not found 00:07:13.321 07:39:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59386 ]] 00:07:13.322 07:39:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59386 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59386 00:07:13.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59386) - No such process 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59386 is not found' 00:07:13.322 07:39:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59404 ]] 00:07:13.322 07:39:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59404 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59404 ']' 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59404 00:07:13.322 Process with pid 59404 is not found 00:07:13.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59404) - No such process 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59404 is not found' 00:07:13.322 07:39:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.322 ************************************ 00:07:13.322 END TEST cpu_locks 00:07:13.322 ************************************ 00:07:13.322 00:07:13.322 real 0m48.955s 00:07:13.322 user 1m24.507s 00:07:13.322 sys 0m6.283s 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:13.322 07:39:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.322 ************************************ 00:07:13.322 END TEST event 00:07:13.322 ************************************ 00:07:13.322 00:07:13.322 real 1m19.978s 00:07:13.322 user 2m26.405s 00:07:13.322 sys 0m10.131s 00:07:13.322 07:39:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.322 07:39:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.322 07:39:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.322 07:39:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.322 07:39:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.322 07:39:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.322 ************************************ 00:07:13.322 START TEST thread 00:07:13.322 ************************************ 00:07:13.322 07:39:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.322 * Looking for test storage... 
00:07:13.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:13.322 07:39:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.322 07:39:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.322 07:39:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.322 07:39:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.322 07:39:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.322 07:39:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.322 07:39:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.322 07:39:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.581 07:39:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.581 07:39:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.581 07:39:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.581 07:39:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.581 07:39:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.581 07:39:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.581 07:39:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.581 07:39:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:13.581 07:39:03 thread -- scripts/common.sh@345 -- # : 1 00:07:13.581 07:39:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.581 07:39:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.581 07:39:03 thread -- scripts/common.sh@365 -- # decimal 1 00:07:13.581 07:39:03 thread -- scripts/common.sh@353 -- # local d=1 00:07:13.581 07:39:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.581 07:39:03 thread -- scripts/common.sh@355 -- # echo 1 00:07:13.581 07:39:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.581 07:39:03 thread -- scripts/common.sh@366 -- # decimal 2 00:07:13.581 07:39:03 thread -- scripts/common.sh@353 -- # local d=2 00:07:13.581 07:39:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.581 07:39:03 thread -- scripts/common.sh@355 -- # echo 2 00:07:13.581 07:39:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.581 07:39:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.581 07:39:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.581 07:39:03 thread -- scripts/common.sh@368 -- # return 0 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 00:07:13.581 ' 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 00:07:13.581 ' 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.581 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 00:07:13.581 ' 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.581 --rc genhtml_branch_coverage=1 00:07:13.581 --rc genhtml_function_coverage=1 00:07:13.581 --rc genhtml_legend=1 00:07:13.581 --rc geninfo_all_blocks=1 00:07:13.581 --rc geninfo_unexecuted_blocks=1 00:07:13.581 00:07:13.581 ' 00:07:13.581 07:39:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.581 07:39:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.581 ************************************ 00:07:13.581 START TEST thread_poller_perf 00:07:13.581 ************************************ 00:07:13.581 07:39:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.581 [2024-11-29 07:39:03.335642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:13.581 [2024-11-29 07:39:03.335828] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:07:13.581 [2024-11-29 07:39:03.520167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.841 [2024-11-29 07:39:03.627151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.841 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:15.249 [2024-11-29T07:39:05.194Z] ====================================== 00:07:15.249 [2024-11-29T07:39:05.194Z] busy:2301026670 (cyc) 00:07:15.249 [2024-11-29T07:39:05.194Z] total_run_count: 411000 00:07:15.249 [2024-11-29T07:39:05.194Z] tsc_hz: 2290000000 (cyc) 00:07:15.249 [2024-11-29T07:39:05.194Z] ====================================== 00:07:15.249 [2024-11-29T07:39:05.194Z] poller_cost: 5598 (cyc), 2444 (nsec) 00:07:15.249 00:07:15.249 real 0m1.560s 00:07:15.249 user 0m1.350s 00:07:15.249 sys 0m0.103s 00:07:15.249 07:39:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.249 07:39:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.249 ************************************ 00:07:15.249 END TEST thread_poller_perf 00:07:15.249 ************************************ 00:07:15.249 07:39:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.249 07:39:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:15.249 07:39:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.249 07:39:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.249 ************************************ 00:07:15.249 START TEST thread_poller_perf 00:07:15.249 
************************************ 00:07:15.249 07:39:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.249 [2024-11-29 07:39:04.971138] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:15.249 [2024-11-29 07:39:04.971289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:07:15.249 [2024-11-29 07:39:05.144043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.509 [2024-11-29 07:39:05.257046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.509 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:16.885 [2024-11-29T07:39:06.830Z] ====================================== 00:07:16.885 [2024-11-29T07:39:06.830Z] busy:2293545112 (cyc) 00:07:16.885 [2024-11-29T07:39:06.830Z] total_run_count: 5474000 00:07:16.885 [2024-11-29T07:39:06.830Z] tsc_hz: 2290000000 (cyc) 00:07:16.885 [2024-11-29T07:39:06.830Z] ====================================== 00:07:16.885 [2024-11-29T07:39:06.830Z] poller_cost: 418 (cyc), 182 (nsec) 00:07:16.885 00:07:16.885 real 0m1.556s 00:07:16.885 user 0m1.365s 00:07:16.885 sys 0m0.084s 00:07:16.885 07:39:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.885 07:39:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.885 ************************************ 00:07:16.885 END TEST thread_poller_perf 00:07:16.885 ************************************ 00:07:16.885 07:39:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:16.885 ************************************ 00:07:16.885 END TEST thread 00:07:16.885 ************************************ 00:07:16.885 
00:07:16.885 real 0m3.468s 00:07:16.885 user 0m2.896s 00:07:16.885 sys 0m0.369s 00:07:16.885 07:39:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.885 07:39:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.885 07:39:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:16.885 07:39:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:16.885 07:39:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.885 07:39:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.885 07:39:06 -- common/autotest_common.sh@10 -- # set +x 00:07:16.885 ************************************ 00:07:16.885 START TEST app_cmdline 00:07:16.885 ************************************ 00:07:16.885 07:39:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:16.885 * Looking for test storage... 00:07:16.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:16.885 07:39:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.885 07:39:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.885 07:39:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.885 07:39:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.885 07:39:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.886 07:39:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.886 --rc genhtml_branch_coverage=1 00:07:16.886 --rc genhtml_function_coverage=1 00:07:16.886 --rc 
genhtml_legend=1 00:07:16.886 --rc geninfo_all_blocks=1 00:07:16.886 --rc geninfo_unexecuted_blocks=1 00:07:16.886 00:07:16.886 ' 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.886 --rc genhtml_branch_coverage=1 00:07:16.886 --rc genhtml_function_coverage=1 00:07:16.886 --rc genhtml_legend=1 00:07:16.886 --rc geninfo_all_blocks=1 00:07:16.886 --rc geninfo_unexecuted_blocks=1 00:07:16.886 00:07:16.886 ' 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.886 --rc genhtml_branch_coverage=1 00:07:16.886 --rc genhtml_function_coverage=1 00:07:16.886 --rc genhtml_legend=1 00:07:16.886 --rc geninfo_all_blocks=1 00:07:16.886 --rc geninfo_unexecuted_blocks=1 00:07:16.886 00:07:16.886 ' 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.886 --rc genhtml_branch_coverage=1 00:07:16.886 --rc genhtml_function_coverage=1 00:07:16.886 --rc genhtml_legend=1 00:07:16.886 --rc geninfo_all_blocks=1 00:07:16.886 --rc geninfo_unexecuted_blocks=1 00:07:16.886 00:07:16.886 ' 00:07:16.886 07:39:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.886 07:39:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.886 07:39:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59725 00:07:16.886 07:39:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59725 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59725 ']' 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.886 07:39:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.144 [2024-11-29 07:39:06.895824] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:17.144 [2024-11-29 07:39:06.896023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:07:17.144 [2024-11-29 07:39:07.066164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.401 [2024-11-29 07:39:07.179149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:18.336 { 00:07:18.336 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:18.336 "fields": { 00:07:18.336 "major": 25, 00:07:18.336 "minor": 1, 00:07:18.336 "patch": 0, 00:07:18.336 "suffix": "-pre", 00:07:18.336 "commit": "35cd3e84d" 00:07:18.336 } 00:07:18.336 } 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:18.336 07:39:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:18.336 07:39:08 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.595 request: 00:07:18.595 { 00:07:18.595 "method": "env_dpdk_get_mem_stats", 00:07:18.595 "req_id": 1 00:07:18.595 } 00:07:18.595 Got JSON-RPC error response 00:07:18.595 response: 00:07:18.595 { 00:07:18.595 "code": -32601, 00:07:18.595 "message": "Method not found" 00:07:18.595 } 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.595 07:39:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59725 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59725 ']' 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59725 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59725 00:07:18.595 killing process with pid 59725 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59725' 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 59725 00:07:18.595 07:39:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 59725 00:07:21.134 00:07:21.134 real 0m4.209s 00:07:21.134 user 0m4.402s 00:07:21.134 sys 0m0.586s 00:07:21.134 07:39:10 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.134 ************************************ 00:07:21.134 END TEST app_cmdline 00:07:21.134 ************************************ 00:07:21.134 07:39:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.134 07:39:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.135 07:39:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.135 07:39:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.135 07:39:10 -- common/autotest_common.sh@10 -- # set +x 00:07:21.135 ************************************ 00:07:21.135 START TEST version 00:07:21.135 ************************************ 00:07:21.135 07:39:10 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.135 * Looking for test storage... 00:07:21.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:21.135 07:39:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.135 07:39:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.135 07:39:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.135 07:39:11 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.135 07:39:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.135 07:39:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.135 07:39:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.135 07:39:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.135 07:39:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.135 07:39:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.135 07:39:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.135 07:39:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.135 07:39:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.135 07:39:11 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:21.135 07:39:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.135 07:39:11 version -- scripts/common.sh@344 -- # case "$op" in 00:07:21.135 07:39:11 version -- scripts/common.sh@345 -- # : 1 00:07:21.135 07:39:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.135 07:39:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.135 07:39:11 version -- scripts/common.sh@365 -- # decimal 1 00:07:21.135 07:39:11 version -- scripts/common.sh@353 -- # local d=1 00:07:21.135 07:39:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.135 07:39:11 version -- scripts/common.sh@355 -- # echo 1 00:07:21.135 07:39:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.135 07:39:11 version -- scripts/common.sh@366 -- # decimal 2 00:07:21.135 07:39:11 version -- scripts/common.sh@353 -- # local d=2 00:07:21.135 07:39:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.135 07:39:11 version -- scripts/common.sh@355 -- # echo 2 00:07:21.394 07:39:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.394 07:39:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.394 07:39:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.394 07:39:11 version -- scripts/common.sh@368 -- # return 0 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.394 --rc genhtml_branch_coverage=1 00:07:21.394 --rc genhtml_function_coverage=1 00:07:21.394 --rc genhtml_legend=1 00:07:21.394 --rc geninfo_all_blocks=1 00:07:21.394 --rc geninfo_unexecuted_blocks=1 00:07:21.394 00:07:21.394 ' 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.394 --rc genhtml_branch_coverage=1 00:07:21.394 --rc genhtml_function_coverage=1 00:07:21.394 --rc genhtml_legend=1 00:07:21.394 --rc geninfo_all_blocks=1 00:07:21.394 --rc geninfo_unexecuted_blocks=1 00:07:21.394 00:07:21.394 ' 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.394 --rc genhtml_branch_coverage=1 00:07:21.394 --rc genhtml_function_coverage=1 00:07:21.394 --rc genhtml_legend=1 00:07:21.394 --rc geninfo_all_blocks=1 00:07:21.394 --rc geninfo_unexecuted_blocks=1 00:07:21.394 00:07:21.394 ' 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.394 --rc genhtml_branch_coverage=1 00:07:21.394 --rc genhtml_function_coverage=1 00:07:21.394 --rc genhtml_legend=1 00:07:21.394 --rc geninfo_all_blocks=1 00:07:21.394 --rc geninfo_unexecuted_blocks=1 00:07:21.394 00:07:21.394 ' 00:07:21.394 07:39:11 version -- app/version.sh@17 -- # get_header_version major 00:07:21.394 07:39:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.394 07:39:11 version -- app/version.sh@17 -- # major=25 00:07:21.394 07:39:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.394 07:39:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.394 07:39:11 version -- app/version.sh@18 -- # minor=1 00:07:21.394 07:39:11 
version -- app/version.sh@19 -- # get_header_version patch 00:07:21.394 07:39:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.394 07:39:11 version -- app/version.sh@19 -- # patch=0 00:07:21.394 07:39:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.394 07:39:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.394 07:39:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.394 07:39:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.394 07:39:11 version -- app/version.sh@22 -- # version=25.1 00:07:21.394 07:39:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.394 07:39:11 version -- app/version.sh@28 -- # version=25.1rc0 00:07:21.394 07:39:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:21.394 07:39:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:21.394 07:39:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:21.394 07:39:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:21.394 00:07:21.394 real 0m0.311s 00:07:21.394 user 0m0.189s 00:07:21.394 sys 0m0.178s 00:07:21.394 ************************************ 00:07:21.394 END TEST version 00:07:21.394 ************************************ 00:07:21.394 07:39:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.394 07:39:11 version -- common/autotest_common.sh@10 -- # set +x 00:07:21.394 
07:39:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:21.394 07:39:11 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:21.394 07:39:11 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:21.394 07:39:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.394 07:39:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.394 07:39:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.394 ************************************ 00:07:21.394 START TEST bdev_raid 00:07:21.394 ************************************ 00:07:21.394 07:39:11 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:21.394 * Looking for test storage... 00:07:21.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:21.654 07:39:11 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.654 07:39:11 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.654 07:39:11 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.654 07:39:11 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.654 07:39:11 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:21.655 07:39:11 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.655 07:39:11 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.655 07:39:11 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.655 07:39:11 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.655 --rc genhtml_branch_coverage=1 00:07:21.655 --rc genhtml_function_coverage=1 00:07:21.655 --rc genhtml_legend=1 00:07:21.655 --rc geninfo_all_blocks=1 00:07:21.655 --rc geninfo_unexecuted_blocks=1 00:07:21.655 00:07:21.655 ' 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.655 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:21.655 --rc genhtml_branch_coverage=1 00:07:21.655 --rc genhtml_function_coverage=1 00:07:21.655 --rc genhtml_legend=1 00:07:21.655 --rc geninfo_all_blocks=1 00:07:21.655 --rc geninfo_unexecuted_blocks=1 00:07:21.655 00:07:21.655 ' 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.655 --rc genhtml_branch_coverage=1 00:07:21.655 --rc genhtml_function_coverage=1 00:07:21.655 --rc genhtml_legend=1 00:07:21.655 --rc geninfo_all_blocks=1 00:07:21.655 --rc geninfo_unexecuted_blocks=1 00:07:21.655 00:07:21.655 ' 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.655 --rc genhtml_branch_coverage=1 00:07:21.655 --rc genhtml_function_coverage=1 00:07:21.655 --rc genhtml_legend=1 00:07:21.655 --rc geninfo_all_blocks=1 00:07:21.655 --rc geninfo_unexecuted_blocks=1 00:07:21.655 00:07:21.655 ' 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:21.655 07:39:11 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:21.655 07:39:11 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.655 07:39:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.655 ************************************ 
00:07:21.655 START TEST raid1_resize_data_offset_test 00:07:21.655 ************************************ 00:07:21.655 Process raid pid: 59913 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59913 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59913' 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59913 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59913 ']' 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.655 07:39:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.655 [2024-11-29 07:39:11.556245] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:21.655 [2024-11-29 07:39:11.556873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.915 [2024-11-29 07:39:11.733150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.915 [2024-11-29 07:39:11.842563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.175 [2024-11-29 07:39:12.044057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.175 [2024-11-29 07:39:12.044207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.435 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.435 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.435 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:22.435 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.435 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.696 malloc0 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.696 malloc1 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.696 07:39:12 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.696 null0 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.696 [2024-11-29 07:39:12.549655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:22.696 [2024-11-29 07:39:12.551475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.696 [2024-11-29 07:39:12.551567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:22.696 [2024-11-29 07:39:12.551731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:22.696 [2024-11-29 07:39:12.551779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:22.696 [2024-11-29 07:39:12.552047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:22.696 [2024-11-29 07:39:12.552248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:22.696 [2024-11-29 07:39:12.552266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:22.696 [2024-11-29 07:39:12.552426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:22.696 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.697 [2024-11-29 07:39:12.605543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.697 07:39:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.270 malloc2 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.270 [2024-11-29 07:39:13.134875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:23.270 [2024-11-29 07:39:13.151832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.270 [2024-11-29 07:39:13.153654] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59913 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59913 ']' 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59913 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:23.270 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59913 00:07:23.530 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.530 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.530 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59913' 00:07:23.530 killing process with pid 59913 00:07:23.530 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59913 00:07:23.530 07:39:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59913 00:07:23.530 [2024-11-29 07:39:13.244380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.530 [2024-11-29 07:39:13.246137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:23.530 [2024-11-29 07:39:13.246190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.530 [2024-11-29 07:39:13.246207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:23.530 [2024-11-29 07:39:13.280571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.530 [2024-11-29 07:39:13.280905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.530 [2024-11-29 07:39:13.280925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:25.439 [2024-11-29 07:39:14.987375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.378 07:39:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:26.378 00:07:26.378 real 0m4.606s 00:07:26.378 user 0m4.503s 00:07:26.378 sys 0m0.516s 00:07:26.378 
************************************ 00:07:26.378 END TEST raid1_resize_data_offset_test 00:07:26.378 ************************************ 00:07:26.378 07:39:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.378 07:39:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.378 07:39:16 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:26.378 07:39:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.378 07:39:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.378 07:39:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.378 ************************************ 00:07:26.378 START TEST raid0_resize_superblock_test 00:07:26.378 ************************************ 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59996 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59996' 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.378 Process raid pid: 59996 00:07:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59996 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59996 ']' 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.378 07:39:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.378 [2024-11-29 07:39:16.223582] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:26.378 [2024-11-29 07:39:16.223694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.639 [2024-11-29 07:39:16.380187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.639 [2024-11-29 07:39:16.489618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.899 [2024-11-29 07:39:16.688390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.899 [2024-11-29 07:39:16.688487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.160 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.160 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.160 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:27.160 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.160 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.730 malloc0 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.730 [2024-11-29 07:39:17.569451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:27.730 [2024-11-29 07:39:17.569508] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.730 [2024-11-29 07:39:17.569529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:27.730 [2024-11-29 07:39:17.569540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.730 [2024-11-29 07:39:17.571613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.730 [2024-11-29 07:39:17.571720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:27.730 pt0 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.730 4f2ba360-d7f8-4939-a9ba-14e1c360716c 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.730 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.990 3b067340-05e9-4f66-8c13-189df13bbf67 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.990 07:39:17 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.990 ede44b36-5ad3-49c3-be6d-84a6d52d8694 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.990 [2024-11-29 07:39:17.702016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b067340-05e9-4f66-8c13-189df13bbf67 is claimed 00:07:27.990 [2024-11-29 07:39:17.702118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ede44b36-5ad3-49c3-be6d-84a6d52d8694 is claimed 00:07:27.990 [2024-11-29 07:39:17.702258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:27.990 [2024-11-29 07:39:17.702272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:27.990 [2024-11-29 07:39:17.702548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.990 [2024-11-29 07:39:17.702750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:27.990 [2024-11-29 07:39:17.702761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:27.990 [2024-11-29 07:39:17.702903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:27.990 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 [2024-11-29 
07:39:17.814019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 [2024-11-29 07:39:17.841918] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.991 [2024-11-29 07:39:17.841942] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3b067340-05e9-4f66-8c13-189df13bbf67' was resized: old size 131072, new size 204800 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 [2024-11-29 07:39:17.853842] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.991 [2024-11-29 07:39:17.853863] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ede44b36-5ad3-49c3-be6d-84a6d52d8694' was resized: old size 131072, new size 204800 00:07:27.991 
[2024-11-29 07:39:17.853888] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.251 07:39:17 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.251 [2024-11-29 07:39:17.961752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:28.251 07:39:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.251 [2024-11-29 07:39:18.029469] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:28.251 [2024-11-29 07:39:18.029572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:28.251 [2024-11-29 07:39:18.029604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.251 [2024-11-29 07:39:18.029636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:28.251 [2024-11-29 07:39:18.029765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.251 [2024-11-29 07:39:18.029816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.251 
[2024-11-29 07:39:18.029863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.251 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.251 [2024-11-29 07:39:18.041381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:28.251 [2024-11-29 07:39:18.041428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.251 [2024-11-29 07:39:18.041446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:28.251 [2024-11-29 07:39:18.041456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.251 [2024-11-29 07:39:18.043571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.252 [2024-11-29 07:39:18.043665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:28.252 [2024-11-29 07:39:18.045367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3b067340-05e9-4f66-8c13-189df13bbf67 00:07:28.252 [2024-11-29 07:39:18.045438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b067340-05e9-4f66-8c13-189df13bbf67 is claimed 00:07:28.252 [2024-11-29 07:39:18.045557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ede44b36-5ad3-49c3-be6d-84a6d52d8694 00:07:28.252 [2024-11-29 07:39:18.045575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ede44b36-5ad3-49c3-be6d-84a6d52d8694 is claimed 00:07:28.252 [2024-11-29 07:39:18.045753] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ede44b36-5ad3-49c3-be6d-84a6d52d8694 (2) smaller than existing raid bdev Raid (3) 00:07:28.252 [2024-11-29 07:39:18.045778] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3b067340-05e9-4f66-8c13-189df13bbf67: File exists 00:07:28.252 [2024-11-29 07:39:18.045814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:28.252 [2024-11-29 07:39:18.045825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:28.252 [2024-11-29 07:39:18.046071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:28.252 pt0 00:07:28.252 [2024-11-29 07:39:18.046235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:28.252 [2024-11-29 07:39:18.046249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:28.252 [2024-11-29 07:39:18.046402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.252 [2024-11-29 07:39:18.069592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59996 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59996 ']' 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59996 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59996 00:07:28.252 killing process with pid 59996 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59996' 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59996 00:07:28.252 [2024-11-29 07:39:18.146869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.252 [2024-11-29 07:39:18.146927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.252 [2024-11-29 07:39:18.146963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.252 [2024-11-29 07:39:18.146974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:28.252 07:39:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59996 00:07:29.633 [2024-11-29 07:39:19.500716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.014 07:39:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:31.014 00:07:31.014 real 0m4.446s 00:07:31.014 user 0m4.662s 00:07:31.014 sys 0m0.540s 00:07:31.014 07:39:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.014 07:39:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 ************************************ 00:07:31.014 END TEST raid0_resize_superblock_test 00:07:31.014 ************************************ 00:07:31.014 07:39:20 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:31.014 07:39:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.014 07:39:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.014 07:39:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 ************************************ 00:07:31.014 START TEST raid1_resize_superblock_test 00:07:31.014 
************************************ 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60089 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60089' 00:07:31.014 Process raid pid: 60089 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60089 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60089 ']' 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.014 07:39:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 [2024-11-29 07:39:20.735523] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:31.014 [2024-11-29 07:39:20.735703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.014 [2024-11-29 07:39:20.889863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.274 [2024-11-29 07:39:20.997248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.274 [2024-11-29 07:39:21.192137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.274 [2024-11-29 07:39:21.192219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.844 07:39:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.844 07:39:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.844 07:39:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:31.844 07:39:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.844 07:39:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 malloc0 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 [2024-11-29 07:39:22.089833] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:32.414 [2024-11-29 07:39:22.089905] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.414 [2024-11-29 07:39:22.089943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:32.414 [2024-11-29 07:39:22.089954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.414 [2024-11-29 07:39:22.092172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.414 [2024-11-29 07:39:22.092248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:32.414 pt0 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 cea6ca2b-9e13-4511-8fe5-c1e9a578175d 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 59936881-9c5b-4453-a5d1-1c0e226e5943 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 [2024-11-29 07:39:22.223275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80 is claimed 00:07:32.414 [2024-11-29 07:39:22.223388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 59936881-9c5b-4453-a5d1-1c0e226e5943 is claimed 00:07:32.414 [2024-11-29 07:39:22.223522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.414 [2024-11-29 07:39:22.223537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:32.414 [2024-11-29 07:39:22.223783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.414 [2024-11-29 07:39:22.223969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.414 [2024-11-29 07:39:22.223980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:32.414 [2024-11-29 07:39:22.224148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:32.414 [2024-11-29 
07:39:22.331322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.414 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 [2024-11-29 07:39:22.383197] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.675 [2024-11-29 07:39:22.383262] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80' was resized: old size 131072, new size 204800 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 [2024-11-29 07:39:22.395082] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.675 [2024-11-29 07:39:22.395159] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '59936881-9c5b-4453-a5d1-1c0e226e5943' was resized: old size 131072, new size 204800 00:07:32.675 
[2024-11-29 07:39:22.395213] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:32.675 [2024-11-29 07:39:22.502975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 [2024-11-29 07:39:22.550715] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:32.675 [2024-11-29 07:39:22.550781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:32.675 [2024-11-29 07:39:22.550805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:32.675 [2024-11-29 07:39:22.550936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.675 [2024-11-29 07:39:22.551117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.675 [2024-11-29 07:39:22.551179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.675 
[2024-11-29 07:39:22.551191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 [2024-11-29 07:39:22.562631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:32.675 [2024-11-29 07:39:22.562679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.675 [2024-11-29 07:39:22.562696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:32.675 [2024-11-29 07:39:22.562707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.675 [2024-11-29 07:39:22.564908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.675 [2024-11-29 07:39:22.564945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:32.675 [2024-11-29 07:39:22.566541] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80 00:07:32.675 [2024-11-29 07:39:22.566667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80 is claimed 00:07:32.675 [2024-11-29 07:39:22.566783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 59936881-9c5b-4453-a5d1-1c0e226e5943 00:07:32.675 [2024-11-29 07:39:22.566802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 59936881-9c5b-4453-a5d1-1c0e226e5943 is claimed 00:07:32.675 [2024-11-29 07:39:22.566925] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 59936881-9c5b-4453-a5d1-1c0e226e5943 (2) smaller than existing raid bdev Raid (3) 00:07:32.675 [2024-11-29 07:39:22.566944] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ba8bd6b7-6e96-4b4b-9f00-a087e32d4d80: File exists 00:07:32.675 [2024-11-29 07:39:22.566982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:32.675 [2024-11-29 07:39:22.566993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:32.675 [2024-11-29 07:39:22.567260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:32.675 [2024-11-29 07:39:22.567424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:32.675 [2024-11-29 07:39:22.567439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:32.675 [2024-11-29 07:39:22.567615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.675 pt0 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.675 [2024-11-29 07:39:22.591164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.675 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60089 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60089 ']' 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60089 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60089 00:07:32.935 killing process with pid 60089 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60089' 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60089 00:07:32.935 [2024-11-29 07:39:22.665328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.935 [2024-11-29 07:39:22.665387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.935 [2024-11-29 07:39:22.665430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.935 [2024-11-29 07:39:22.665438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:32.935 07:39:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60089 00:07:34.314 [2024-11-29 07:39:24.034452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.254 07:39:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:35.254 00:07:35.254 real 0m4.471s 00:07:35.254 user 0m4.703s 00:07:35.254 sys 0m0.516s 00:07:35.254 ************************************ 00:07:35.254 END TEST raid1_resize_superblock_test 00:07:35.254 ************************************ 00:07:35.254 07:39:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.254 07:39:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.254 07:39:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:35.254 07:39:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:35.254 07:39:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:35.254 07:39:25 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:35.254 07:39:25 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:35.513 07:39:25 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:35.513 
07:39:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.513 07:39:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.513 07:39:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.513 ************************************ 00:07:35.513 START TEST raid_function_test_raid0 00:07:35.513 ************************************ 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60192 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60192' 00:07:35.513 Process raid pid: 60192 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60192 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60192 ']' 00:07:35.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.513 07:39:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:35.513 [2024-11-29 07:39:25.294832] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:35.513 [2024-11-29 07:39:25.294946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.772 [2024-11-29 07:39:25.467643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.772 [2024-11-29 07:39:25.576064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.032 [2024-11-29 07:39:25.767039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.032 [2024-11-29 07:39:25.767074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 Base_1 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.292 
07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 Base_2 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 [2024-11-29 07:39:26.205101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:36.292 [2024-11-29 07:39:26.206873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:36.292 [2024-11-29 07:39:26.206942] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:36.292 [2024-11-29 07:39:26.206955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:36.292 [2024-11-29 07:39:26.207215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:36.292 [2024-11-29 07:39:26.207364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:36.292 [2024-11-29 07:39:26.207377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:36.292 [2024-11-29 07:39:26.207525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:36.292 07:39:26 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:36.553 [2024-11-29 07:39:26.448766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:36.553 /dev/nbd0 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.553 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.813 1+0 records in 00:07:36.813 1+0 records out 00:07:36.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00250607 s, 1.6 MB/s 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:36.813 { 00:07:36.813 "nbd_device": "/dev/nbd0", 00:07:36.813 "bdev_name": "raid" 00:07:36.813 } 00:07:36.813 ]' 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:36.813 { 00:07:36.813 "nbd_device": "/dev/nbd0", 00:07:36.813 "bdev_name": "raid" 00:07:36.813 } 00:07:36.813 ]' 00:07:36.813 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:37.073 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:37.074 4096+0 records in 00:07:37.074 4096+0 records out 00:07:37.074 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0338157 s, 62.0 MB/s 00:07:37.074 07:39:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:37.334 4096+0 records in 00:07:37.334 4096+0 records out 00:07:37.334 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.186987 s, 11.2 MB/s 00:07:37.334 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:37.334 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:37.335 128+0 records in 00:07:37.335 128+0 records out 00:07:37.335 65536 bytes (66 kB, 64 KiB) copied, 0.00121254 s, 54.0 MB/s 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:37.335 2035+0 records in 00:07:37.335 2035+0 records out 00:07:37.335 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0139602 s, 74.6 MB/s 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:37.335 456+0 records in 00:07:37.335 456+0 records out 00:07:37.335 233472 bytes (233 kB, 228 KiB) copied, 0.00381118 s, 61.3 MB/s 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.335 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.596 [2024-11-29 07:39:27.360804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.596 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60192 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60192 ']' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60192 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60192 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60192' 00:07:37.856 killing process with pid 60192 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60192 00:07:37.856 [2024-11-29 07:39:27.668613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.856 [2024-11-29 07:39:27.668725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.856 [2024-11-29 07:39:27.668774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.856 07:39:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60192 00:07:37.856 [2024-11-29 07:39:27.668791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:38.116 [2024-11-29 07:39:27.865778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.056 07:39:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:39.056 00:07:39.056 real 0m3.738s 00:07:39.056 user 0m4.304s 00:07:39.056 sys 0m0.967s 00:07:39.056 07:39:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.056 ************************************ 00:07:39.056 END TEST raid_function_test_raid0 00:07:39.056 ************************************ 00:07:39.056 07:39:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:39.316 07:39:28 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:39.316 07:39:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.316 07:39:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.316 07:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.316 
************************************ 00:07:39.316 START TEST raid_function_test_concat 00:07:39.316 ************************************ 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60321 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.316 Process raid pid: 60321 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60321' 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60321 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60321 ']' 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.316 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:39.317 [2024-11-29 07:39:29.099138] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:39.317 [2024-11-29 07:39:29.099252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.576 [2024-11-29 07:39:29.274841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.576 [2024-11-29 07:39:29.387502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.836 [2024-11-29 07:39:29.585475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.836 [2024-11-29 07:39:29.585522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.097 Base_1 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:40.097 07:39:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.097 Base_2 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.097 [2024-11-29 07:39:30.011162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:40.097 [2024-11-29 07:39:30.012939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:40.097 [2024-11-29 07:39:30.013024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:40.097 [2024-11-29 07:39:30.013035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:40.097 [2024-11-29 07:39:30.013290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:40.097 [2024-11-29 07:39:30.013441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:40.097 [2024-11-29 07:39:30.013458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:40.097 [2024-11-29 07:39:30.013593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.097 07:39:30 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.097 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:40.357 [2024-11-29 07:39:30.246794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:40.357 /dev/nbd0 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.357 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.357 1+0 records in 00:07:40.357 1+0 records out 00:07:40.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427265 s, 9.6 MB/s 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.617 
07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.617 { 00:07:40.617 "nbd_device": "/dev/nbd0", 00:07:40.617 "bdev_name": "raid" 00:07:40.617 } 00:07:40.617 ]' 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.617 { 00:07:40.617 "nbd_device": "/dev/nbd0", 00:07:40.617 "bdev_name": "raid" 00:07:40.617 } 00:07:40.617 ]' 00:07:40.617 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:40.878 
07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:40.878 4096+0 records in 00:07:40.878 4096+0 records out 00:07:40.878 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.033145 s, 63.3 MB/s 00:07:40.878 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:41.138 4096+0 records in 00:07:41.138 4096+0 
records out 00:07:41.138 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.198026 s, 10.6 MB/s 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:41.138 128+0 records in 00:07:41.138 128+0 records out 00:07:41.138 65536 bytes (66 kB, 64 KiB) copied, 0.00108495 s, 60.4 MB/s 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:41.138 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:41.139 2035+0 records in 00:07:41.139 2035+0 records out 00:07:41.139 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0129668 s, 80.4 MB/s 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:41.139 456+0 records in 00:07:41.139 456+0 records out 00:07:41.139 233472 bytes (233 kB, 228 KiB) copied, 0.00320985 s, 72.7 MB/s 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.139 07:39:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.399 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.399 [2024-11-29 07:39:31.160759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.400 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:41.400 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.400 07:39:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:41.400 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:41.400 07:39:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:41.660 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60321 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60321 ']' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60321 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60321 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.661 killing process with pid 60321 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60321' 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60321 00:07:41.661 [2024-11-29 07:39:31.471707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.661 [2024-11-29 07:39:31.471813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.661 [2024-11-29 07:39:31.471871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.661 [2024-11-29 07:39:31.471882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:41.661 07:39:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60321 00:07:41.921 [2024-11-29 07:39:31.668615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.861 07:39:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:42.861 00:07:42.861 real 0m3.738s 00:07:42.861 user 0m4.346s 00:07:42.861 sys 0m0.913s 00:07:42.861 07:39:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.861 07:39:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:42.861 ************************************ 00:07:42.861 END TEST raid_function_test_concat 00:07:42.861 ************************************ 00:07:42.861 07:39:32 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:42.861 07:39:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.861 07:39:32 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.861 07:39:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.121 ************************************ 00:07:43.121 START TEST raid0_resize_test 00:07:43.121 ************************************ 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60437 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.121 Process raid pid: 60437 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60437' 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60437 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60437 ']' 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:43.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.121 07:39:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.121 [2024-11-29 07:39:32.901652] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:43.121 [2024-11-29 07:39:32.901767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.121 [2024-11-29 07:39:33.056371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.381 [2024-11-29 07:39:33.165322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.641 [2024-11-29 07:39:33.360196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.641 [2024-11-29 07:39:33.360237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 Base_1 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.901 
07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 Base_2 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 [2024-11-29 07:39:33.742337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:43.901 [2024-11-29 07:39:33.744140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:43.901 [2024-11-29 07:39:33.744195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.901 [2024-11-29 07:39:33.744208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:43.901 [2024-11-29 07:39:33.744458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.901 [2024-11-29 07:39:33.744600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.901 [2024-11-29 07:39:33.744614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:43.901 [2024-11-29 07:39:33.744747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.901 
07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 [2024-11-29 07:39:33.750303] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.901 [2024-11-29 07:39:33.750331] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:43.901 true 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 [2024-11-29 07:39:33.762437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:43.901 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.902 [2024-11-29 07:39:33.810205] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:43.902 [2024-11-29 07:39:33.810228] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:43.902 [2024-11-29 07:39:33.810253] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:43.902 true 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.902 [2024-11-29 07:39:33.826353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.902 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60437 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60437 ']' 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60437 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60437 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.162 killing process with pid 60437 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60437' 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60437 00:07:44.162 [2024-11-29 07:39:33.905426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.162 [2024-11-29 07:39:33.905493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.162 [2024-11-29 07:39:33.905536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.162 [2024-11-29 07:39:33.905545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:44.162 07:39:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60437 00:07:44.162 [2024-11-29 07:39:33.922873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.163 07:39:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:45.163 00:07:45.163 real 0m2.191s 00:07:45.163 user 0m2.332s 00:07:45.163 sys 0m0.329s 00:07:45.163 07:39:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.163 
07:39:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.163 ************************************ 00:07:45.163 END TEST raid0_resize_test 00:07:45.163 ************************************ 00:07:45.163 07:39:35 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:45.163 07:39:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.163 07:39:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.163 07:39:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.163 ************************************ 00:07:45.163 START TEST raid1_resize_test 00:07:45.163 ************************************ 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60498 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.163 Process raid pid: 60498 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 
'Process raid pid: 60498' 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60498 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60498 ']' 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.163 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.420 [2024-11-29 07:39:35.157806] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:45.420 [2024-11-29 07:39:35.157931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.420 [2024-11-29 07:39:35.326475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.679 [2024-11-29 07:39:35.434819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.938 [2024-11-29 07:39:35.633031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.938 [2024-11-29 07:39:35.633069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.198 Base_1 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.198 Base_2 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.198 07:39:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.198 [2024-11-29 07:39:36.002786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:46.198 [2024-11-29 07:39:36.004501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:46.198 [2024-11-29 07:39:36.004571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.198 [2024-11-29 07:39:36.004583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.198 [2024-11-29 07:39:36.004843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:46.198 [2024-11-29 07:39:36.004984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.198 [2024-11-29 07:39:36.004996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:46.198 [2024-11-29 07:39:36.005166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.198 [2024-11-29 07:39:36.010754] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.198 [2024-11-29 07:39:36.010789] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:46.198 true 00:07:46.198 
07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.198 [2024-11-29 07:39:36.026882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.198 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.199 [2024-11-29 07:39:36.074611] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.199 [2024-11-29 07:39:36.074635] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:46.199 [2024-11-29 07:39:36.074661] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:46.199 true 00:07:46.199 07:39:36 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:46.199 [2024-11-29 07:39:36.086772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60498 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60498 ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60498 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.199 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60498 00:07:46.459 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.459 07:39:36 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.459 killing process with pid 60498 00:07:46.459 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60498' 00:07:46.459 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60498 00:07:46.459 [2024-11-29 07:39:36.172982] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.459 [2024-11-29 07:39:36.173077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.459 07:39:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60498 00:07:46.459 [2024-11-29 07:39:36.173553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.459 [2024-11-29 07:39:36.173587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:46.459 [2024-11-29 07:39:36.189665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.397 07:39:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:47.397 00:07:47.397 real 0m2.198s 00:07:47.397 user 0m2.330s 00:07:47.397 sys 0m0.330s 00:07:47.397 07:39:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.397 07:39:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.397 ************************************ 00:07:47.397 END TEST raid1_resize_test 00:07:47.397 ************************************ 00:07:47.397 07:39:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:47.397 07:39:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:47.397 07:39:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:47.397 07:39:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.397 07:39:37 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.397 07:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.397 ************************************ 00:07:47.397 START TEST raid_state_function_test 00:07:47.397 ************************************ 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.397 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60561 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:47.658 Process raid pid: 60561 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60561' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60561 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60561 ']' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.658 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-29 07:39:37.430413] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:47.658 [2024-11-29 07:39:37.430523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.658 [2024-11-29 07:39:37.601532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.918 [2024-11-29 07:39:37.709463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.178 [2024-11-29 07:39:37.909343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.178 [2024-11-29 07:39:37.909383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.438 [2024-11-29 07:39:38.235018] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.438 
[2024-11-29 07:39:38.235066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.438 [2024-11-29 07:39:38.235076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.438 [2024-11-29 07:39:38.235085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.438 "name": "Existed_Raid", 00:07:48.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.438 "strip_size_kb": 64, 00:07:48.438 "state": "configuring", 00:07:48.438 "raid_level": "raid0", 00:07:48.438 "superblock": false, 00:07:48.438 "num_base_bdevs": 2, 00:07:48.438 "num_base_bdevs_discovered": 0, 00:07:48.438 "num_base_bdevs_operational": 2, 00:07:48.438 "base_bdevs_list": [ 00:07:48.438 { 00:07:48.438 "name": "BaseBdev1", 00:07:48.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.438 "is_configured": false, 00:07:48.438 "data_offset": 0, 00:07:48.438 "data_size": 0 00:07:48.438 }, 00:07:48.438 { 00:07:48.438 "name": "BaseBdev2", 00:07:48.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.438 "is_configured": false, 00:07:48.438 "data_offset": 0, 00:07:48.438 "data_size": 0 00:07:48.438 } 00:07:48.438 ] 00:07:48.438 }' 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.438 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 [2024-11-29 07:39:38.670240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.009 [2024-11-29 07:39:38.670280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 [2024-11-29 07:39:38.682197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.009 [2024-11-29 07:39:38.682233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.009 [2024-11-29 07:39:38.682257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.009 [2024-11-29 07:39:38.682269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 [2024-11-29 07:39:38.729271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.009 BaseBdev1 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:49.009 07:39:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 [ 00:07:49.009 { 00:07:49.009 "name": "BaseBdev1", 00:07:49.009 "aliases": [ 00:07:49.009 "fc460fba-0fbb-4cfd-bce2-22e770a48dfa" 00:07:49.009 ], 00:07:49.009 "product_name": "Malloc disk", 00:07:49.009 "block_size": 512, 00:07:49.009 "num_blocks": 65536, 00:07:49.009 "uuid": "fc460fba-0fbb-4cfd-bce2-22e770a48dfa", 00:07:49.009 "assigned_rate_limits": { 00:07:49.009 "rw_ios_per_sec": 0, 00:07:49.009 "rw_mbytes_per_sec": 0, 00:07:49.009 "r_mbytes_per_sec": 0, 00:07:49.009 "w_mbytes_per_sec": 0 00:07:49.009 }, 00:07:49.009 "claimed": true, 00:07:49.009 "claim_type": "exclusive_write", 00:07:49.009 "zoned": false, 00:07:49.009 "supported_io_types": { 00:07:49.009 "read": true, 00:07:49.009 "write": true, 00:07:49.009 "unmap": true, 00:07:49.009 "flush": true, 
00:07:49.009 "reset": true, 00:07:49.009 "nvme_admin": false, 00:07:49.009 "nvme_io": false, 00:07:49.009 "nvme_io_md": false, 00:07:49.009 "write_zeroes": true, 00:07:49.009 "zcopy": true, 00:07:49.009 "get_zone_info": false, 00:07:49.009 "zone_management": false, 00:07:49.009 "zone_append": false, 00:07:49.009 "compare": false, 00:07:49.009 "compare_and_write": false, 00:07:49.009 "abort": true, 00:07:49.009 "seek_hole": false, 00:07:49.009 "seek_data": false, 00:07:49.009 "copy": true, 00:07:49.009 "nvme_iov_md": false 00:07:49.009 }, 00:07:49.009 "memory_domains": [ 00:07:49.009 { 00:07:49.009 "dma_device_id": "system", 00:07:49.009 "dma_device_type": 1 00:07:49.009 }, 00:07:49.009 { 00:07:49.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.009 "dma_device_type": 2 00:07:49.009 } 00:07:49.009 ], 00:07:49.009 "driver_specific": {} 00:07:49.009 } 00:07:49.009 ] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.009 07:39:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.010 "name": "Existed_Raid", 00:07:49.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.010 "strip_size_kb": 64, 00:07:49.010 "state": "configuring", 00:07:49.010 "raid_level": "raid0", 00:07:49.010 "superblock": false, 00:07:49.010 "num_base_bdevs": 2, 00:07:49.010 "num_base_bdevs_discovered": 1, 00:07:49.010 "num_base_bdevs_operational": 2, 00:07:49.010 "base_bdevs_list": [ 00:07:49.010 { 00:07:49.010 "name": "BaseBdev1", 00:07:49.010 "uuid": "fc460fba-0fbb-4cfd-bce2-22e770a48dfa", 00:07:49.010 "is_configured": true, 00:07:49.010 "data_offset": 0, 00:07:49.010 "data_size": 65536 00:07:49.010 }, 00:07:49.010 { 00:07:49.010 "name": "BaseBdev2", 00:07:49.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.010 "is_configured": false, 00:07:49.010 "data_offset": 0, 00:07:49.010 "data_size": 0 00:07:49.010 } 00:07:49.010 ] 00:07:49.010 }' 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.010 07:39:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.270 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.270 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.270 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.270 [2024-11-29 07:39:39.148606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.270 [2024-11-29 07:39:39.148662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.270 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.270 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.271 [2024-11-29 07:39:39.160646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.271 [2024-11-29 07:39:39.162423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.271 [2024-11-29 07:39:39.162461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.271 "name": "Existed_Raid", 00:07:49.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.271 "strip_size_kb": 64, 00:07:49.271 "state": "configuring", 00:07:49.271 "raid_level": "raid0", 00:07:49.271 "superblock": false, 00:07:49.271 "num_base_bdevs": 2, 00:07:49.271 
"num_base_bdevs_discovered": 1, 00:07:49.271 "num_base_bdevs_operational": 2, 00:07:49.271 "base_bdevs_list": [ 00:07:49.271 { 00:07:49.271 "name": "BaseBdev1", 00:07:49.271 "uuid": "fc460fba-0fbb-4cfd-bce2-22e770a48dfa", 00:07:49.271 "is_configured": true, 00:07:49.271 "data_offset": 0, 00:07:49.271 "data_size": 65536 00:07:49.271 }, 00:07:49.271 { 00:07:49.271 "name": "BaseBdev2", 00:07:49.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.271 "is_configured": false, 00:07:49.271 "data_offset": 0, 00:07:49.271 "data_size": 0 00:07:49.271 } 00:07:49.271 ] 00:07:49.271 }' 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.271 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 [2024-11-29 07:39:39.615653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.841 [2024-11-29 07:39:39.615702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.841 [2024-11-29 07:39:39.615726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:49.841 [2024-11-29 07:39:39.615982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.841 [2024-11-29 07:39:39.616194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.841 [2024-11-29 07:39:39.616222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:49.841 [2024-11-29 07:39:39.616472] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.841 BaseBdev2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 [ 00:07:49.841 { 00:07:49.841 "name": "BaseBdev2", 00:07:49.841 "aliases": [ 00:07:49.841 "8334e12b-6a63-480b-868c-04ac49e5c2fc" 00:07:49.841 ], 00:07:49.841 "product_name": "Malloc disk", 00:07:49.841 "block_size": 512, 00:07:49.841 "num_blocks": 65536, 00:07:49.841 "uuid": "8334e12b-6a63-480b-868c-04ac49e5c2fc", 00:07:49.841 
"assigned_rate_limits": { 00:07:49.841 "rw_ios_per_sec": 0, 00:07:49.841 "rw_mbytes_per_sec": 0, 00:07:49.841 "r_mbytes_per_sec": 0, 00:07:49.841 "w_mbytes_per_sec": 0 00:07:49.841 }, 00:07:49.841 "claimed": true, 00:07:49.841 "claim_type": "exclusive_write", 00:07:49.841 "zoned": false, 00:07:49.841 "supported_io_types": { 00:07:49.841 "read": true, 00:07:49.841 "write": true, 00:07:49.841 "unmap": true, 00:07:49.841 "flush": true, 00:07:49.841 "reset": true, 00:07:49.841 "nvme_admin": false, 00:07:49.841 "nvme_io": false, 00:07:49.841 "nvme_io_md": false, 00:07:49.841 "write_zeroes": true, 00:07:49.841 "zcopy": true, 00:07:49.841 "get_zone_info": false, 00:07:49.841 "zone_management": false, 00:07:49.841 "zone_append": false, 00:07:49.841 "compare": false, 00:07:49.841 "compare_and_write": false, 00:07:49.841 "abort": true, 00:07:49.841 "seek_hole": false, 00:07:49.841 "seek_data": false, 00:07:49.841 "copy": true, 00:07:49.841 "nvme_iov_md": false 00:07:49.841 }, 00:07:49.841 "memory_domains": [ 00:07:49.841 { 00:07:49.841 "dma_device_id": "system", 00:07:49.841 "dma_device_type": 1 00:07:49.841 }, 00:07:49.841 { 00:07:49.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.841 "dma_device_type": 2 00:07:49.841 } 00:07:49.841 ], 00:07:49.841 "driver_specific": {} 00:07:49.841 } 00:07:49.841 ] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.841 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.841 "name": "Existed_Raid", 00:07:49.841 "uuid": "8039e509-5eb8-49af-9156-4eba2ef0d2fd", 00:07:49.841 "strip_size_kb": 64, 00:07:49.842 "state": "online", 00:07:49.842 "raid_level": "raid0", 00:07:49.842 "superblock": false, 00:07:49.842 "num_base_bdevs": 2, 00:07:49.842 "num_base_bdevs_discovered": 2, 00:07:49.842 "num_base_bdevs_operational": 2, 00:07:49.842 "base_bdevs_list": [ 00:07:49.842 { 
00:07:49.842 "name": "BaseBdev1", 00:07:49.842 "uuid": "fc460fba-0fbb-4cfd-bce2-22e770a48dfa", 00:07:49.842 "is_configured": true, 00:07:49.842 "data_offset": 0, 00:07:49.842 "data_size": 65536 00:07:49.842 }, 00:07:49.842 { 00:07:49.842 "name": "BaseBdev2", 00:07:49.842 "uuid": "8334e12b-6a63-480b-868c-04ac49e5c2fc", 00:07:49.842 "is_configured": true, 00:07:49.842 "data_offset": 0, 00:07:49.842 "data_size": 65536 00:07:49.842 } 00:07:49.842 ] 00:07:49.842 }' 00:07:49.842 07:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.842 07:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.101 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.101 [2024-11-29 07:39:40.035277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.361 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:50.361 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.361 "name": "Existed_Raid", 00:07:50.362 "aliases": [ 00:07:50.362 "8039e509-5eb8-49af-9156-4eba2ef0d2fd" 00:07:50.362 ], 00:07:50.362 "product_name": "Raid Volume", 00:07:50.362 "block_size": 512, 00:07:50.362 "num_blocks": 131072, 00:07:50.362 "uuid": "8039e509-5eb8-49af-9156-4eba2ef0d2fd", 00:07:50.362 "assigned_rate_limits": { 00:07:50.362 "rw_ios_per_sec": 0, 00:07:50.362 "rw_mbytes_per_sec": 0, 00:07:50.362 "r_mbytes_per_sec": 0, 00:07:50.362 "w_mbytes_per_sec": 0 00:07:50.362 }, 00:07:50.362 "claimed": false, 00:07:50.362 "zoned": false, 00:07:50.362 "supported_io_types": { 00:07:50.362 "read": true, 00:07:50.362 "write": true, 00:07:50.362 "unmap": true, 00:07:50.362 "flush": true, 00:07:50.362 "reset": true, 00:07:50.362 "nvme_admin": false, 00:07:50.362 "nvme_io": false, 00:07:50.362 "nvme_io_md": false, 00:07:50.362 "write_zeroes": true, 00:07:50.362 "zcopy": false, 00:07:50.362 "get_zone_info": false, 00:07:50.362 "zone_management": false, 00:07:50.362 "zone_append": false, 00:07:50.362 "compare": false, 00:07:50.362 "compare_and_write": false, 00:07:50.362 "abort": false, 00:07:50.362 "seek_hole": false, 00:07:50.362 "seek_data": false, 00:07:50.362 "copy": false, 00:07:50.362 "nvme_iov_md": false 00:07:50.362 }, 00:07:50.362 "memory_domains": [ 00:07:50.362 { 00:07:50.362 "dma_device_id": "system", 00:07:50.362 "dma_device_type": 1 00:07:50.362 }, 00:07:50.362 { 00:07:50.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.362 "dma_device_type": 2 00:07:50.362 }, 00:07:50.362 { 00:07:50.362 "dma_device_id": "system", 00:07:50.362 "dma_device_type": 1 00:07:50.362 }, 00:07:50.362 { 00:07:50.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.362 "dma_device_type": 2 00:07:50.362 } 00:07:50.362 ], 00:07:50.362 "driver_specific": { 00:07:50.362 "raid": { 00:07:50.362 "uuid": "8039e509-5eb8-49af-9156-4eba2ef0d2fd", 
00:07:50.362 "strip_size_kb": 64, 00:07:50.362 "state": "online", 00:07:50.362 "raid_level": "raid0", 00:07:50.362 "superblock": false, 00:07:50.362 "num_base_bdevs": 2, 00:07:50.362 "num_base_bdevs_discovered": 2, 00:07:50.362 "num_base_bdevs_operational": 2, 00:07:50.362 "base_bdevs_list": [ 00:07:50.362 { 00:07:50.362 "name": "BaseBdev1", 00:07:50.362 "uuid": "fc460fba-0fbb-4cfd-bce2-22e770a48dfa", 00:07:50.362 "is_configured": true, 00:07:50.362 "data_offset": 0, 00:07:50.362 "data_size": 65536 00:07:50.362 }, 00:07:50.362 { 00:07:50.362 "name": "BaseBdev2", 00:07:50.362 "uuid": "8334e12b-6a63-480b-868c-04ac49e5c2fc", 00:07:50.362 "is_configured": true, 00:07:50.362 "data_offset": 0, 00:07:50.362 "data_size": 65536 00:07:50.362 } 00:07:50.362 ] 00:07:50.362 } 00:07:50.362 } 00:07:50.362 }' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.362 BaseBdev2' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.362 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.362 [2024-11-29 07:39:40.230660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.362 [2024-11-29 07:39:40.230696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.362 [2024-11-29 07:39:40.230745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.622 07:39:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.622 "name": "Existed_Raid", 00:07:50.622 "uuid": "8039e509-5eb8-49af-9156-4eba2ef0d2fd", 00:07:50.622 "strip_size_kb": 64, 00:07:50.622 "state": "offline", 00:07:50.622 "raid_level": "raid0", 00:07:50.622 "superblock": false, 00:07:50.622 "num_base_bdevs": 2, 00:07:50.622 "num_base_bdevs_discovered": 1, 00:07:50.622 "num_base_bdevs_operational": 1, 00:07:50.622 "base_bdevs_list": [ 00:07:50.622 { 00:07:50.622 "name": null, 00:07:50.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.622 "is_configured": false, 00:07:50.622 "data_offset": 0, 00:07:50.622 "data_size": 65536 00:07:50.622 }, 00:07:50.622 { 00:07:50.622 "name": "BaseBdev2", 00:07:50.622 "uuid": "8334e12b-6a63-480b-868c-04ac49e5c2fc", 00:07:50.622 "is_configured": true, 00:07:50.622 "data_offset": 0, 00:07:50.622 "data_size": 65536 00:07:50.622 } 00:07:50.622 ] 00:07:50.622 }' 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.622 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:50.883 07:39:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.883 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 [2024-11-29 07:39:40.808532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:50.883 [2024-11-29 07:39:40.808589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60561 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60561 ']' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60561 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60561 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.144 killing process with pid 60561 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60561' 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60561 00:07:51.144 [2024-11-29 07:39:40.994710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.144 07:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60561 00:07:51.144 [2024-11-29 07:39:41.011353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.525 07:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.525 00:07:52.525 real 0m4.754s 00:07:52.525 user 0m6.813s 00:07:52.525 sys 
0m0.765s 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 ************************************ 00:07:52.526 END TEST raid_state_function_test 00:07:52.526 ************************************ 00:07:52.526 07:39:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:52.526 07:39:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.526 07:39:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.526 07:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 ************************************ 00:07:52.526 START TEST raid_state_function_test_sb 00:07:52.526 ************************************ 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60803 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
60803' 00:07:52.526 Process raid pid: 60803 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60803 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60803 ']' 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.526 07:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 [2024-11-29 07:39:42.254362] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:52.526 [2024-11-29 07:39:42.254495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.526 [2024-11-29 07:39:42.425524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.785 [2024-11-29 07:39:42.540925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.044 [2024-11-29 07:39:42.732879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.044 [2024-11-29 07:39:42.732922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.357 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.358 [2024-11-29 07:39:43.063160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.358 [2024-11-29 07:39:43.063208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.358 [2024-11-29 07:39:43.063218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.358 [2024-11-29 07:39:43.063228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.358 
07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.358 "name": "Existed_Raid", 00:07:53.358 "uuid": "6989be97-95e3-4ad9-a02b-09bf095bff2b", 00:07:53.358 "strip_size_kb": 
64, 00:07:53.358 "state": "configuring", 00:07:53.358 "raid_level": "raid0", 00:07:53.358 "superblock": true, 00:07:53.358 "num_base_bdevs": 2, 00:07:53.358 "num_base_bdevs_discovered": 0, 00:07:53.358 "num_base_bdevs_operational": 2, 00:07:53.358 "base_bdevs_list": [ 00:07:53.358 { 00:07:53.358 "name": "BaseBdev1", 00:07:53.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.358 "is_configured": false, 00:07:53.358 "data_offset": 0, 00:07:53.358 "data_size": 0 00:07:53.358 }, 00:07:53.358 { 00:07:53.358 "name": "BaseBdev2", 00:07:53.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.358 "is_configured": false, 00:07:53.358 "data_offset": 0, 00:07:53.358 "data_size": 0 00:07:53.358 } 00:07:53.358 ] 00:07:53.358 }' 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.358 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.617 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.617 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.617 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [2024-11-29 07:39:43.450425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.618 [2024-11-29 07:39:43.450461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 07:39:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [2024-11-29 07:39:43.462408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.618 [2024-11-29 07:39:43.462455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.618 [2024-11-29 07:39:43.462479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.618 [2024-11-29 07:39:43.462490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [2024-11-29 07:39:43.505916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.618 BaseBdev1 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [ 00:07:53.618 { 00:07:53.618 "name": "BaseBdev1", 00:07:53.618 "aliases": [ 00:07:53.618 "73e60f51-31ab-4bba-84e3-87c607f74fcb" 00:07:53.618 ], 00:07:53.618 "product_name": "Malloc disk", 00:07:53.618 "block_size": 512, 00:07:53.618 "num_blocks": 65536, 00:07:53.618 "uuid": "73e60f51-31ab-4bba-84e3-87c607f74fcb", 00:07:53.618 "assigned_rate_limits": { 00:07:53.618 "rw_ios_per_sec": 0, 00:07:53.618 "rw_mbytes_per_sec": 0, 00:07:53.618 "r_mbytes_per_sec": 0, 00:07:53.618 "w_mbytes_per_sec": 0 00:07:53.618 }, 00:07:53.618 "claimed": true, 00:07:53.618 "claim_type": "exclusive_write", 00:07:53.618 "zoned": false, 00:07:53.618 "supported_io_types": { 00:07:53.618 "read": true, 00:07:53.618 "write": true, 00:07:53.618 "unmap": true, 00:07:53.618 "flush": true, 00:07:53.618 "reset": true, 00:07:53.618 "nvme_admin": false, 00:07:53.618 "nvme_io": false, 00:07:53.618 "nvme_io_md": false, 00:07:53.618 "write_zeroes": true, 00:07:53.618 "zcopy": true, 00:07:53.618 "get_zone_info": false, 00:07:53.618 "zone_management": false, 00:07:53.618 "zone_append": false, 00:07:53.618 "compare": false, 00:07:53.618 "compare_and_write": false, 00:07:53.618 
"abort": true, 00:07:53.618 "seek_hole": false, 00:07:53.618 "seek_data": false, 00:07:53.618 "copy": true, 00:07:53.618 "nvme_iov_md": false 00:07:53.618 }, 00:07:53.618 "memory_domains": [ 00:07:53.618 { 00:07:53.618 "dma_device_id": "system", 00:07:53.618 "dma_device_type": 1 00:07:53.618 }, 00:07:53.618 { 00:07:53.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.618 "dma_device_type": 2 00:07:53.618 } 00:07:53.618 ], 00:07:53.618 "driver_specific": {} 00:07:53.618 } 00:07:53.618 ] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.878 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.878 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.878 "name": "Existed_Raid", 00:07:53.878 "uuid": "1b7cd9c8-6fe3-4640-936a-269377db587e", 00:07:53.878 "strip_size_kb": 64, 00:07:53.878 "state": "configuring", 00:07:53.878 "raid_level": "raid0", 00:07:53.878 "superblock": true, 00:07:53.878 "num_base_bdevs": 2, 00:07:53.878 "num_base_bdevs_discovered": 1, 00:07:53.878 "num_base_bdevs_operational": 2, 00:07:53.878 "base_bdevs_list": [ 00:07:53.878 { 00:07:53.878 "name": "BaseBdev1", 00:07:53.878 "uuid": "73e60f51-31ab-4bba-84e3-87c607f74fcb", 00:07:53.878 "is_configured": true, 00:07:53.878 "data_offset": 2048, 00:07:53.878 "data_size": 63488 00:07:53.878 }, 00:07:53.878 { 00:07:53.878 "name": "BaseBdev2", 00:07:53.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.878 "is_configured": false, 00:07:53.878 "data_offset": 0, 00:07:53.878 "data_size": 0 00:07:53.878 } 00:07:53.878 ] 00:07:53.878 }' 00:07:53.878 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.878 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.140 [2024-11-29 07:39:43.941202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.140 [2024-11-29 07:39:43.941291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.140 [2024-11-29 07:39:43.953232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.140 [2024-11-29 07:39:43.954964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.140 [2024-11-29 07:39:43.955037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.140 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.140 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.140 "name": "Existed_Raid", 00:07:54.140 "uuid": "0fbf9233-af99-4f50-88ba-b14064720182", 00:07:54.140 "strip_size_kb": 64, 00:07:54.140 "state": "configuring", 00:07:54.140 "raid_level": "raid0", 00:07:54.140 "superblock": true, 00:07:54.140 "num_base_bdevs": 2, 00:07:54.140 "num_base_bdevs_discovered": 1, 00:07:54.140 "num_base_bdevs_operational": 2, 00:07:54.140 "base_bdevs_list": [ 00:07:54.140 { 00:07:54.140 "name": "BaseBdev1", 00:07:54.140 "uuid": "73e60f51-31ab-4bba-84e3-87c607f74fcb", 00:07:54.140 "is_configured": true, 00:07:54.140 "data_offset": 2048, 
00:07:54.140 "data_size": 63488 00:07:54.140 }, 00:07:54.140 { 00:07:54.140 "name": "BaseBdev2", 00:07:54.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.140 "is_configured": false, 00:07:54.140 "data_offset": 0, 00:07:54.140 "data_size": 0 00:07:54.140 } 00:07:54.140 ] 00:07:54.140 }' 00:07:54.140 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.140 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 [2024-11-29 07:39:44.413215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.710 [2024-11-29 07:39:44.413553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.710 [2024-11-29 07:39:44.413571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.710 [2024-11-29 07:39:44.413827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.710 [2024-11-29 07:39:44.413984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.710 [2024-11-29 07:39:44.413998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:54.710 [2024-11-29 07:39:44.414143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.710 BaseBdev2 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.710 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 [ 00:07:54.710 { 00:07:54.710 "name": "BaseBdev2", 00:07:54.710 "aliases": [ 00:07:54.710 "f4bce659-cabb-45ce-82b7-84eaa70accc6" 00:07:54.710 ], 00:07:54.710 "product_name": "Malloc disk", 00:07:54.710 "block_size": 512, 00:07:54.710 "num_blocks": 65536, 00:07:54.710 "uuid": "f4bce659-cabb-45ce-82b7-84eaa70accc6", 00:07:54.710 "assigned_rate_limits": { 00:07:54.710 "rw_ios_per_sec": 0, 00:07:54.711 "rw_mbytes_per_sec": 0, 00:07:54.711 "r_mbytes_per_sec": 0, 00:07:54.711 "w_mbytes_per_sec": 0 00:07:54.711 }, 00:07:54.711 "claimed": true, 00:07:54.711 "claim_type": 
"exclusive_write", 00:07:54.711 "zoned": false, 00:07:54.711 "supported_io_types": { 00:07:54.711 "read": true, 00:07:54.711 "write": true, 00:07:54.711 "unmap": true, 00:07:54.711 "flush": true, 00:07:54.711 "reset": true, 00:07:54.711 "nvme_admin": false, 00:07:54.711 "nvme_io": false, 00:07:54.711 "nvme_io_md": false, 00:07:54.711 "write_zeroes": true, 00:07:54.711 "zcopy": true, 00:07:54.711 "get_zone_info": false, 00:07:54.711 "zone_management": false, 00:07:54.711 "zone_append": false, 00:07:54.711 "compare": false, 00:07:54.711 "compare_and_write": false, 00:07:54.711 "abort": true, 00:07:54.711 "seek_hole": false, 00:07:54.711 "seek_data": false, 00:07:54.711 "copy": true, 00:07:54.711 "nvme_iov_md": false 00:07:54.711 }, 00:07:54.711 "memory_domains": [ 00:07:54.711 { 00:07:54.711 "dma_device_id": "system", 00:07:54.711 "dma_device_type": 1 00:07:54.711 }, 00:07:54.711 { 00:07:54.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.711 "dma_device_type": 2 00:07:54.711 } 00:07:54.711 ], 00:07:54.711 "driver_specific": {} 00:07:54.711 } 00:07:54.711 ] 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.711 "name": "Existed_Raid", 00:07:54.711 "uuid": "0fbf9233-af99-4f50-88ba-b14064720182", 00:07:54.711 "strip_size_kb": 64, 00:07:54.711 "state": "online", 00:07:54.711 "raid_level": "raid0", 00:07:54.711 "superblock": true, 00:07:54.711 "num_base_bdevs": 2, 00:07:54.711 "num_base_bdevs_discovered": 2, 00:07:54.711 "num_base_bdevs_operational": 2, 00:07:54.711 "base_bdevs_list": [ 00:07:54.711 { 00:07:54.711 "name": "BaseBdev1", 00:07:54.711 "uuid": "73e60f51-31ab-4bba-84e3-87c607f74fcb", 00:07:54.711 "is_configured": true, 00:07:54.711 "data_offset": 2048, 00:07:54.711 "data_size": 63488 
00:07:54.711 }, 00:07:54.711 { 00:07:54.711 "name": "BaseBdev2", 00:07:54.711 "uuid": "f4bce659-cabb-45ce-82b7-84eaa70accc6", 00:07:54.711 "is_configured": true, 00:07:54.711 "data_offset": 2048, 00:07:54.711 "data_size": 63488 00:07:54.711 } 00:07:54.711 ] 00:07:54.711 }' 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.711 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 [2024-11-29 07:39:44.860742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.971 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.971 "name": 
"Existed_Raid", 00:07:54.971 "aliases": [ 00:07:54.971 "0fbf9233-af99-4f50-88ba-b14064720182" 00:07:54.971 ], 00:07:54.971 "product_name": "Raid Volume", 00:07:54.971 "block_size": 512, 00:07:54.971 "num_blocks": 126976, 00:07:54.971 "uuid": "0fbf9233-af99-4f50-88ba-b14064720182", 00:07:54.971 "assigned_rate_limits": { 00:07:54.971 "rw_ios_per_sec": 0, 00:07:54.971 "rw_mbytes_per_sec": 0, 00:07:54.971 "r_mbytes_per_sec": 0, 00:07:54.971 "w_mbytes_per_sec": 0 00:07:54.971 }, 00:07:54.971 "claimed": false, 00:07:54.971 "zoned": false, 00:07:54.971 "supported_io_types": { 00:07:54.971 "read": true, 00:07:54.971 "write": true, 00:07:54.971 "unmap": true, 00:07:54.971 "flush": true, 00:07:54.971 "reset": true, 00:07:54.971 "nvme_admin": false, 00:07:54.971 "nvme_io": false, 00:07:54.971 "nvme_io_md": false, 00:07:54.971 "write_zeroes": true, 00:07:54.971 "zcopy": false, 00:07:54.971 "get_zone_info": false, 00:07:54.971 "zone_management": false, 00:07:54.971 "zone_append": false, 00:07:54.971 "compare": false, 00:07:54.971 "compare_and_write": false, 00:07:54.971 "abort": false, 00:07:54.971 "seek_hole": false, 00:07:54.971 "seek_data": false, 00:07:54.971 "copy": false, 00:07:54.971 "nvme_iov_md": false 00:07:54.971 }, 00:07:54.971 "memory_domains": [ 00:07:54.971 { 00:07:54.971 "dma_device_id": "system", 00:07:54.971 "dma_device_type": 1 00:07:54.971 }, 00:07:54.971 { 00:07:54.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.971 "dma_device_type": 2 00:07:54.971 }, 00:07:54.971 { 00:07:54.971 "dma_device_id": "system", 00:07:54.971 "dma_device_type": 1 00:07:54.971 }, 00:07:54.971 { 00:07:54.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.971 "dma_device_type": 2 00:07:54.971 } 00:07:54.971 ], 00:07:54.971 "driver_specific": { 00:07:54.971 "raid": { 00:07:54.971 "uuid": "0fbf9233-af99-4f50-88ba-b14064720182", 00:07:54.971 "strip_size_kb": 64, 00:07:54.971 "state": "online", 00:07:54.971 "raid_level": "raid0", 00:07:54.971 "superblock": true, 00:07:54.971 
"num_base_bdevs": 2, 00:07:54.971 "num_base_bdevs_discovered": 2, 00:07:54.971 "num_base_bdevs_operational": 2, 00:07:54.971 "base_bdevs_list": [ 00:07:54.971 { 00:07:54.971 "name": "BaseBdev1", 00:07:54.971 "uuid": "73e60f51-31ab-4bba-84e3-87c607f74fcb", 00:07:54.971 "is_configured": true, 00:07:54.971 "data_offset": 2048, 00:07:54.971 "data_size": 63488 00:07:54.971 }, 00:07:54.971 { 00:07:54.971 "name": "BaseBdev2", 00:07:54.971 "uuid": "f4bce659-cabb-45ce-82b7-84eaa70accc6", 00:07:54.971 "is_configured": true, 00:07:54.971 "data_offset": 2048, 00:07:54.971 "data_size": 63488 00:07:54.971 } 00:07:54.971 ] 00:07:54.971 } 00:07:54.971 } 00:07:54.971 }' 00:07:54.972 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:55.232 BaseBdev2' 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.232 07:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.232 [2024-11-29 07:39:45.072165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.232 [2024-11-29 07:39:45.072195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.232 [2024-11-29 07:39:45.072243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.232 07:39:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.232 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.492 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.492 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.492 "name": "Existed_Raid", 00:07:55.492 "uuid": "0fbf9233-af99-4f50-88ba-b14064720182", 00:07:55.492 "strip_size_kb": 64, 00:07:55.492 "state": "offline", 00:07:55.492 "raid_level": "raid0", 00:07:55.492 "superblock": true, 00:07:55.492 "num_base_bdevs": 2, 00:07:55.492 "num_base_bdevs_discovered": 1, 00:07:55.492 "num_base_bdevs_operational": 1, 00:07:55.492 "base_bdevs_list": [ 00:07:55.492 { 00:07:55.492 "name": null, 00:07:55.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.492 "is_configured": false, 00:07:55.492 "data_offset": 0, 00:07:55.492 "data_size": 63488 00:07:55.492 }, 00:07:55.492 { 00:07:55.492 "name": "BaseBdev2", 00:07:55.492 "uuid": "f4bce659-cabb-45ce-82b7-84eaa70accc6", 00:07:55.492 "is_configured": true, 00:07:55.492 "data_offset": 2048, 00:07:55.492 "data_size": 63488 00:07:55.492 } 00:07:55.492 ] 00:07:55.492 }' 00:07:55.492 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.492 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.752 07:39:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.752 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.752 [2024-11-29 07:39:45.681797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:55.752 [2024-11-29 07:39:45.681896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60803 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60803 ']' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60803 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60803 00:07:56.012 killing process with pid 60803 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60803' 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60803 00:07:56.012 [2024-11-29 07:39:45.869516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.012 07:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60803 00:07:56.012 [2024-11-29 07:39:45.885236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.393 07:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:57.393 00:07:57.393 real 0m4.800s 00:07:57.394 user 0m6.942s 00:07:57.394 sys 0m0.726s 00:07:57.394 07:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.394 07:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.394 ************************************ 00:07:57.394 END TEST raid_state_function_test_sb 00:07:57.394 ************************************ 00:07:57.394 07:39:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:57.394 07:39:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:57.394 07:39:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.394 07:39:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.394 ************************************ 00:07:57.394 START TEST raid_superblock_test 00:07:57.394 ************************************ 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61055 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61055 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61055 ']' 00:07:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.394 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.394 [2024-11-29 07:39:47.116683] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:57.394 [2024-11-29 07:39:47.116871] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61055 ] 00:07:57.394 [2024-11-29 07:39:47.287861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.654 [2024-11-29 07:39:47.391523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.654 [2024-11-29 07:39:47.582937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.654 [2024-11-29 07:39:47.583050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.224 07:39:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 malloc1 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 [2024-11-29 07:39:47.978428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.224 [2024-11-29 07:39:47.978483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.224 [2024-11-29 07:39:47.978522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:58.224 [2024-11-29 07:39:47.978531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.224 [2024-11-29 07:39:47.980605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.224 [2024-11-29 07:39:47.980644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.224 pt1 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.224 07:39:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 malloc2 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 [2024-11-29 07:39:48.030227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.224 [2024-11-29 07:39:48.030314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.224 [2024-11-29 07:39:48.030372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:58.224 
[2024-11-29 07:39:48.030400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.224 [2024-11-29 07:39:48.032404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.224 [2024-11-29 07:39:48.032470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.224 pt2 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 [2024-11-29 07:39:48.042254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.224 [2024-11-29 07:39:48.043999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.224 [2024-11-29 07:39:48.044214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:58.224 [2024-11-29 07:39:48.044260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.224 [2024-11-29 07:39:48.044507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.224 [2024-11-29 07:39:48.044680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:58.224 [2024-11-29 07:39:48.044722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:58.224 [2024-11-29 07:39:48.044901] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.224 "name": "raid_bdev1", 00:07:58.224 "uuid": 
"cd58a639-e664-4852-ac75-d49f587f2be7", 00:07:58.224 "strip_size_kb": 64, 00:07:58.224 "state": "online", 00:07:58.224 "raid_level": "raid0", 00:07:58.224 "superblock": true, 00:07:58.224 "num_base_bdevs": 2, 00:07:58.224 "num_base_bdevs_discovered": 2, 00:07:58.224 "num_base_bdevs_operational": 2, 00:07:58.224 "base_bdevs_list": [ 00:07:58.224 { 00:07:58.224 "name": "pt1", 00:07:58.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.224 "is_configured": true, 00:07:58.224 "data_offset": 2048, 00:07:58.224 "data_size": 63488 00:07:58.224 }, 00:07:58.224 { 00:07:58.224 "name": "pt2", 00:07:58.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.224 "is_configured": true, 00:07:58.224 "data_offset": 2048, 00:07:58.224 "data_size": 63488 00:07:58.224 } 00:07:58.224 ] 00:07:58.224 }' 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.224 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.796 07:39:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 [2024-11-29 07:39:48.473725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.796 "name": "raid_bdev1", 00:07:58.796 "aliases": [ 00:07:58.796 "cd58a639-e664-4852-ac75-d49f587f2be7" 00:07:58.796 ], 00:07:58.796 "product_name": "Raid Volume", 00:07:58.796 "block_size": 512, 00:07:58.796 "num_blocks": 126976, 00:07:58.796 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:07:58.796 "assigned_rate_limits": { 00:07:58.796 "rw_ios_per_sec": 0, 00:07:58.796 "rw_mbytes_per_sec": 0, 00:07:58.796 "r_mbytes_per_sec": 0, 00:07:58.796 "w_mbytes_per_sec": 0 00:07:58.796 }, 00:07:58.796 "claimed": false, 00:07:58.796 "zoned": false, 00:07:58.796 "supported_io_types": { 00:07:58.796 "read": true, 00:07:58.796 "write": true, 00:07:58.796 "unmap": true, 00:07:58.796 "flush": true, 00:07:58.796 "reset": true, 00:07:58.796 "nvme_admin": false, 00:07:58.796 "nvme_io": false, 00:07:58.796 "nvme_io_md": false, 00:07:58.796 "write_zeroes": true, 00:07:58.796 "zcopy": false, 00:07:58.796 "get_zone_info": false, 00:07:58.796 "zone_management": false, 00:07:58.796 "zone_append": false, 00:07:58.796 "compare": false, 00:07:58.796 "compare_and_write": false, 00:07:58.796 "abort": false, 00:07:58.796 "seek_hole": false, 00:07:58.796 "seek_data": false, 00:07:58.796 "copy": false, 00:07:58.796 "nvme_iov_md": false 00:07:58.796 }, 00:07:58.796 "memory_domains": [ 00:07:58.796 { 00:07:58.796 "dma_device_id": "system", 00:07:58.796 "dma_device_type": 1 00:07:58.796 }, 00:07:58.796 { 00:07:58.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.796 "dma_device_type": 2 00:07:58.796 }, 00:07:58.796 { 00:07:58.796 "dma_device_id": "system", 00:07:58.796 "dma_device_type": 
1 00:07:58.796 }, 00:07:58.796 { 00:07:58.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.796 "dma_device_type": 2 00:07:58.796 } 00:07:58.796 ], 00:07:58.796 "driver_specific": { 00:07:58.796 "raid": { 00:07:58.796 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:07:58.796 "strip_size_kb": 64, 00:07:58.796 "state": "online", 00:07:58.796 "raid_level": "raid0", 00:07:58.796 "superblock": true, 00:07:58.796 "num_base_bdevs": 2, 00:07:58.796 "num_base_bdevs_discovered": 2, 00:07:58.796 "num_base_bdevs_operational": 2, 00:07:58.796 "base_bdevs_list": [ 00:07:58.796 { 00:07:58.796 "name": "pt1", 00:07:58.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.796 "is_configured": true, 00:07:58.796 "data_offset": 2048, 00:07:58.796 "data_size": 63488 00:07:58.796 }, 00:07:58.796 { 00:07:58.796 "name": "pt2", 00:07:58.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.796 "is_configured": true, 00:07:58.796 "data_offset": 2048, 00:07:58.796 "data_size": 63488 00:07:58.796 } 00:07:58.796 ] 00:07:58.796 } 00:07:58.796 } 00:07:58.796 }' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.796 pt2' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:58.796 [2024-11-29 07:39:48.685345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.796 07:39:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd58a639-e664-4852-ac75-d49f587f2be7 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cd58a639-e664-4852-ac75-d49f587f2be7 ']' 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 [2024-11-29 07:39:48.732993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.796 [2024-11-29 07:39:48.733019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.796 [2024-11-29 07:39:48.733093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.796 [2024-11-29 07:39:48.733145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.796 [2024-11-29 07:39:48.733156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:58.796 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 07:39:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 [2024-11-29 07:39:48.852845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.057 [2024-11-29 07:39:48.854703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.057 [2024-11-29 07:39:48.854768] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:59.057 [2024-11-29 07:39:48.854806] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:59.057 [2024-11-29 07:39:48.854820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.057 [2024-11-29 07:39:48.854841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:59.057 request: 00:07:59.057 { 00:07:59.057 "name": "raid_bdev1", 00:07:59.057 "raid_level": "raid0", 00:07:59.057 "base_bdevs": [ 00:07:59.057 "malloc1", 00:07:59.057 "malloc2" 00:07:59.057 ], 00:07:59.057 "strip_size_kb": 64, 00:07:59.057 "superblock": false, 00:07:59.057 "method": "bdev_raid_create", 00:07:59.057 "req_id": 1 00:07:59.057 } 00:07:59.057 Got JSON-RPC error response 00:07:59.057 response: 00:07:59.057 { 00:07:59.057 "code": -17, 00:07:59.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.057 } 00:07:59.057 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.058 [2024-11-29 07:39:48.916709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.058 [2024-11-29 07:39:48.916759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.058 [2024-11-29 07:39:48.916792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:59.058 [2024-11-29 07:39:48.916804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.058 [2024-11-29 07:39:48.919158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.058 [2024-11-29 07:39:48.919196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.058 [2024-11-29 07:39:48.919279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.058 [2024-11-29 07:39:48.919347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.058 pt1 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.058 "name": "raid_bdev1", 00:07:59.058 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:07:59.058 "strip_size_kb": 64, 00:07:59.058 "state": "configuring", 00:07:59.058 "raid_level": "raid0", 00:07:59.058 "superblock": true, 00:07:59.058 "num_base_bdevs": 2, 00:07:59.058 "num_base_bdevs_discovered": 1, 00:07:59.058 "num_base_bdevs_operational": 2, 00:07:59.058 "base_bdevs_list": [ 00:07:59.058 { 00:07:59.058 "name": "pt1", 00:07:59.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.058 "is_configured": true, 00:07:59.058 "data_offset": 2048, 00:07:59.058 "data_size": 63488 00:07:59.058 }, 00:07:59.058 { 00:07:59.058 "name": null, 00:07:59.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.058 "is_configured": false, 00:07:59.058 "data_offset": 2048, 00:07:59.058 "data_size": 63488 00:07:59.058 } 00:07:59.058 ] 00:07:59.058 }' 00:07:59.058 07:39:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.058 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.635 [2024-11-29 07:39:49.359978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.635 [2024-11-29 07:39:49.360051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.635 [2024-11-29 07:39:49.360072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:59.635 [2024-11-29 07:39:49.360083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.635 [2024-11-29 07:39:49.360576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.635 [2024-11-29 07:39:49.360607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.635 [2024-11-29 07:39:49.360691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.635 [2024-11-29 07:39:49.360726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.635 [2024-11-29 07:39:49.360848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.635 [2024-11-29 07:39:49.360866] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:59.635 [2024-11-29 07:39:49.361118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:59.635 [2024-11-29 07:39:49.361271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.635 [2024-11-29 07:39:49.361286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.635 [2024-11-29 07:39:49.361437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.635 pt2 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.635 "name": "raid_bdev1", 00:07:59.635 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:07:59.635 "strip_size_kb": 64, 00:07:59.635 "state": "online", 00:07:59.635 "raid_level": "raid0", 00:07:59.635 "superblock": true, 00:07:59.635 "num_base_bdevs": 2, 00:07:59.635 "num_base_bdevs_discovered": 2, 00:07:59.635 "num_base_bdevs_operational": 2, 00:07:59.635 "base_bdevs_list": [ 00:07:59.635 { 00:07:59.635 "name": "pt1", 00:07:59.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.635 "is_configured": true, 00:07:59.635 "data_offset": 2048, 00:07:59.635 "data_size": 63488 00:07:59.635 }, 00:07:59.635 { 00:07:59.635 "name": "pt2", 00:07:59.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.635 "is_configured": true, 00:07:59.635 "data_offset": 2048, 00:07:59.635 "data_size": 63488 00:07:59.635 } 00:07:59.635 ] 00:07:59.635 }' 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.635 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.895 
07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.895 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.896 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.896 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.896 [2024-11-29 07:39:49.823417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.156 "name": "raid_bdev1", 00:08:00.156 "aliases": [ 00:08:00.156 "cd58a639-e664-4852-ac75-d49f587f2be7" 00:08:00.156 ], 00:08:00.156 "product_name": "Raid Volume", 00:08:00.156 "block_size": 512, 00:08:00.156 "num_blocks": 126976, 00:08:00.156 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:08:00.156 "assigned_rate_limits": { 00:08:00.156 "rw_ios_per_sec": 0, 00:08:00.156 "rw_mbytes_per_sec": 0, 00:08:00.156 "r_mbytes_per_sec": 0, 00:08:00.156 "w_mbytes_per_sec": 0 00:08:00.156 }, 00:08:00.156 "claimed": false, 00:08:00.156 "zoned": false, 00:08:00.156 "supported_io_types": { 00:08:00.156 "read": true, 00:08:00.156 "write": true, 00:08:00.156 "unmap": true, 00:08:00.156 "flush": true, 00:08:00.156 "reset": true, 00:08:00.156 "nvme_admin": false, 00:08:00.156 "nvme_io": false, 00:08:00.156 "nvme_io_md": false, 00:08:00.156 
"write_zeroes": true, 00:08:00.156 "zcopy": false, 00:08:00.156 "get_zone_info": false, 00:08:00.156 "zone_management": false, 00:08:00.156 "zone_append": false, 00:08:00.156 "compare": false, 00:08:00.156 "compare_and_write": false, 00:08:00.156 "abort": false, 00:08:00.156 "seek_hole": false, 00:08:00.156 "seek_data": false, 00:08:00.156 "copy": false, 00:08:00.156 "nvme_iov_md": false 00:08:00.156 }, 00:08:00.156 "memory_domains": [ 00:08:00.156 { 00:08:00.156 "dma_device_id": "system", 00:08:00.156 "dma_device_type": 1 00:08:00.156 }, 00:08:00.156 { 00:08:00.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.156 "dma_device_type": 2 00:08:00.156 }, 00:08:00.156 { 00:08:00.156 "dma_device_id": "system", 00:08:00.156 "dma_device_type": 1 00:08:00.156 }, 00:08:00.156 { 00:08:00.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.156 "dma_device_type": 2 00:08:00.156 } 00:08:00.156 ], 00:08:00.156 "driver_specific": { 00:08:00.156 "raid": { 00:08:00.156 "uuid": "cd58a639-e664-4852-ac75-d49f587f2be7", 00:08:00.156 "strip_size_kb": 64, 00:08:00.156 "state": "online", 00:08:00.156 "raid_level": "raid0", 00:08:00.156 "superblock": true, 00:08:00.156 "num_base_bdevs": 2, 00:08:00.156 "num_base_bdevs_discovered": 2, 00:08:00.156 "num_base_bdevs_operational": 2, 00:08:00.156 "base_bdevs_list": [ 00:08:00.156 { 00:08:00.156 "name": "pt1", 00:08:00.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.156 "is_configured": true, 00:08:00.156 "data_offset": 2048, 00:08:00.156 "data_size": 63488 00:08:00.156 }, 00:08:00.156 { 00:08:00.156 "name": "pt2", 00:08:00.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.156 "is_configured": true, 00:08:00.156 "data_offset": 2048, 00:08:00.156 "data_size": 63488 00:08:00.156 } 00:08:00.156 ] 00:08:00.156 } 00:08:00.156 } 00:08:00.156 }' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:00.156 pt2' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.156 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.156 07:39:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.156 [2024-11-29 07:39:50.042977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cd58a639-e664-4852-ac75-d49f587f2be7 '!=' cd58a639-e664-4852-ac75-d49f587f2be7 ']' 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61055 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61055 ']' 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61055 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.156 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61055 00:08:00.416 killing process with pid 61055 
00:08:00.416 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.416 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.416 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61055' 00:08:00.416 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61055 00:08:00.416 [2024-11-29 07:39:50.127040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.416 [2024-11-29 07:39:50.127128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.416 [2024-11-29 07:39:50.127177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.416 [2024-11-29 07:39:50.127188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.416 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61055 00:08:00.416 [2024-11-29 07:39:50.320832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.799 07:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.799 00:08:01.799 real 0m4.359s 00:08:01.800 user 0m6.130s 00:08:01.800 sys 0m0.714s 00:08:01.800 07:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.800 07:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.800 ************************************ 00:08:01.800 END TEST raid_superblock_test 00:08:01.800 ************************************ 00:08:01.800 07:39:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:01.800 07:39:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.800 07:39:51 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.800 07:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.800 ************************************ 00:08:01.800 START TEST raid_read_error_test 00:08:01.800 ************************************ 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.800 07:39:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WvqJ9Tgpl0 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61261 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61261 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61261 ']' 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.800 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.800 [2024-11-29 07:39:51.560973] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:01.800 [2024-11-29 07:39:51.561075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61261 ] 00:08:01.800 [2024-11-29 07:39:51.733558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.061 [2024-11-29 07:39:51.841856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.321 [2024-11-29 07:39:52.028757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.321 [2024-11-29 07:39:52.028823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 BaseBdev1_malloc 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 true 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-11-29 07:39:52.430016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.582 [2024-11-29 07:39:52.430079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.582 [2024-11-29 07:39:52.430097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.582 [2024-11-29 07:39:52.430108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.582 [2024-11-29 07:39:52.432078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.582 [2024-11-29 07:39:52.432126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.582 BaseBdev1 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.582 BaseBdev2_malloc 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 true 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-11-29 07:39:52.495605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.582 [2024-11-29 07:39:52.495665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.582 [2024-11-29 07:39:52.495682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.582 [2024-11-29 07:39:52.495692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.582 [2024-11-29 07:39:52.497656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.582 [2024-11-29 07:39:52.497689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.582 BaseBdev2 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.582 07:39:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-11-29 07:39:52.507642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.582 [2024-11-29 07:39:52.509431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.582 [2024-11-29 07:39:52.509620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.582 [2024-11-29 07:39:52.509636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.582 [2024-11-29 07:39:52.509870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:02.582 [2024-11-29 07:39:52.510045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.582 [2024-11-29 07:39:52.510066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.582 [2024-11-29 07:39:52.510208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.582 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.583 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.583 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.583 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.583 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.583 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.843 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.843 "name": "raid_bdev1", 00:08:02.843 "uuid": "8ba68e9e-d3e1-4afe-858a-1c009d663443", 00:08:02.843 "strip_size_kb": 64, 00:08:02.843 "state": "online", 00:08:02.843 "raid_level": "raid0", 00:08:02.843 "superblock": true, 00:08:02.843 "num_base_bdevs": 2, 00:08:02.843 "num_base_bdevs_discovered": 2, 00:08:02.843 "num_base_bdevs_operational": 2, 00:08:02.843 "base_bdevs_list": [ 00:08:02.843 { 00:08:02.843 "name": "BaseBdev1", 00:08:02.843 "uuid": "f976ef0e-bf57-54ca-b19f-6dc9f1963ca2", 00:08:02.843 "is_configured": true, 00:08:02.843 "data_offset": 2048, 00:08:02.843 "data_size": 63488 00:08:02.843 }, 00:08:02.843 { 00:08:02.843 "name": "BaseBdev2", 00:08:02.843 "uuid": "6391ee21-5e61-53ad-8b64-edbbf15181fe", 00:08:02.843 "is_configured": true, 00:08:02.843 "data_offset": 2048, 00:08:02.843 "data_size": 63488 00:08:02.843 } 00:08:02.843 ] 00:08:02.843 }' 00:08:02.843 07:39:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.843 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.103 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.103 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.103 [2024-11-29 07:39:52.979998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.044 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.044 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.045 "name": "raid_bdev1", 00:08:04.045 "uuid": "8ba68e9e-d3e1-4afe-858a-1c009d663443", 00:08:04.045 "strip_size_kb": 64, 00:08:04.045 "state": "online", 00:08:04.045 "raid_level": "raid0", 00:08:04.045 "superblock": true, 00:08:04.045 "num_base_bdevs": 2, 00:08:04.045 "num_base_bdevs_discovered": 2, 00:08:04.045 "num_base_bdevs_operational": 2, 00:08:04.045 "base_bdevs_list": [ 00:08:04.045 { 00:08:04.045 "name": "BaseBdev1", 00:08:04.045 "uuid": "f976ef0e-bf57-54ca-b19f-6dc9f1963ca2", 00:08:04.045 "is_configured": true, 00:08:04.045 "data_offset": 2048, 00:08:04.045 "data_size": 63488 00:08:04.045 }, 00:08:04.045 { 00:08:04.045 "name": "BaseBdev2", 00:08:04.045 "uuid": "6391ee21-5e61-53ad-8b64-edbbf15181fe", 00:08:04.045 "is_configured": true, 00:08:04.045 "data_offset": 2048, 00:08:04.045 "data_size": 63488 00:08:04.045 } 00:08:04.045 ] 00:08:04.045 }' 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.045 07:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 07:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.621 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.621 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 [2024-11-29 07:39:54.313692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.621 [2024-11-29 07:39:54.313731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.621 [2024-11-29 07:39:54.316327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.621 [2024-11-29 07:39:54.316371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.621 [2024-11-29 07:39:54.316402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.621 [2024-11-29 07:39:54.316413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.621 { 00:08:04.621 "results": [ 00:08:04.621 { 00:08:04.621 "job": "raid_bdev1", 00:08:04.621 "core_mask": "0x1", 00:08:04.621 "workload": "randrw", 00:08:04.621 "percentage": 50, 00:08:04.621 "status": "finished", 00:08:04.621 "queue_depth": 1, 00:08:04.621 "io_size": 131072, 00:08:04.621 "runtime": 1.334673, 00:08:04.621 "iops": 16610.81028836277, 00:08:04.621 "mibps": 2076.351286045346, 00:08:04.621 "io_failed": 1, 00:08:04.621 "io_timeout": 0, 00:08:04.621 "avg_latency_us": 83.25586116172451, 00:08:04.621 "min_latency_us": 24.817467248908297, 00:08:04.621 "max_latency_us": 1445.2262008733624 00:08:04.621 } 00:08:04.621 ], 00:08:04.621 "core_count": 1 00:08:04.621 } 00:08:04.621 07:39:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.621 07:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61261 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61261 ']' 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61261 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61261 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.622 killing process with pid 61261 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61261' 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61261 00:08:04.622 [2024-11-29 07:39:54.359783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.622 07:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61261 00:08:04.622 [2024-11-29 07:39:54.489251] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WvqJ9Tgpl0 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:06.003 00:08:06.003 real 0m4.167s 00:08:06.003 user 0m4.927s 00:08:06.003 sys 0m0.519s 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.003 07:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.003 ************************************ 00:08:06.003 END TEST raid_read_error_test 00:08:06.003 ************************************ 00:08:06.003 07:39:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:06.003 07:39:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.003 07:39:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.003 07:39:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.003 ************************************ 00:08:06.003 START TEST raid_write_error_test 00:08:06.003 ************************************ 00:08:06.003 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:06.003 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:06.003 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.004 07:39:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MgvPJFxKMt 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61407 00:08:06.004 07:39:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61407 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61407 ']' 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.004 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.004 [2024-11-29 07:39:55.792886] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:06.004 [2024-11-29 07:39:55.793014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:08:06.265 [2024-11-29 07:39:55.966887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.265 [2024-11-29 07:39:56.071307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.526 [2024-11-29 07:39:56.261982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.526 [2024-11-29 07:39:56.262044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 BaseBdev1_malloc 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 true 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 [2024-11-29 07:39:56.658342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.787 [2024-11-29 07:39:56.658409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.787 [2024-11-29 07:39:56.658426] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.787 [2024-11-29 07:39:56.658437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.787 [2024-11-29 07:39:56.660401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.787 [2024-11-29 07:39:56.660440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.787 BaseBdev1 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 BaseBdev2_malloc 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.787 07:39:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 true 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 [2024-11-29 07:39:56.724554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.787 [2024-11-29 07:39:56.724604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.787 [2024-11-29 07:39:56.724636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.787 [2024-11-29 07:39:56.724646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.787 [2024-11-29 07:39:56.726695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.787 [2024-11-29 07:39:56.726733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.787 BaseBdev2 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.787 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.048 [2024-11-29 07:39:56.736600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:07.048 [2024-11-29 07:39:56.738346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.048 [2024-11-29 07:39:56.738552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.048 [2024-11-29 07:39:56.738567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.048 [2024-11-29 07:39:56.738799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.048 [2024-11-29 07:39:56.738957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.048 [2024-11-29 07:39:56.738977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.048 [2024-11-29 07:39:56.739127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.048 "name": "raid_bdev1", 00:08:07.048 "uuid": "0b2c3c0d-c1fa-4366-9a36-da9b57915928", 00:08:07.048 "strip_size_kb": 64, 00:08:07.048 "state": "online", 00:08:07.048 "raid_level": "raid0", 00:08:07.048 "superblock": true, 00:08:07.048 "num_base_bdevs": 2, 00:08:07.048 "num_base_bdevs_discovered": 2, 00:08:07.048 "num_base_bdevs_operational": 2, 00:08:07.048 "base_bdevs_list": [ 00:08:07.048 { 00:08:07.048 "name": "BaseBdev1", 00:08:07.048 "uuid": "38d31d0f-532e-5139-a1d9-aa7aeb17b6f2", 00:08:07.048 "is_configured": true, 00:08:07.048 "data_offset": 2048, 00:08:07.048 "data_size": 63488 00:08:07.048 }, 00:08:07.048 { 00:08:07.048 "name": "BaseBdev2", 00:08:07.048 "uuid": "22df21d7-4106-55f2-ae89-146138f91a62", 00:08:07.048 "is_configured": true, 00:08:07.048 "data_offset": 2048, 00:08:07.048 "data_size": 63488 00:08:07.048 } 00:08:07.048 ] 00:08:07.048 }' 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.048 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.308 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.308 07:39:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.308 [2024-11-29 07:39:57.221052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.247 07:39:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.247 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.507 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.507 "name": "raid_bdev1", 00:08:08.507 "uuid": "0b2c3c0d-c1fa-4366-9a36-da9b57915928", 00:08:08.507 "strip_size_kb": 64, 00:08:08.507 "state": "online", 00:08:08.507 "raid_level": "raid0", 00:08:08.507 "superblock": true, 00:08:08.507 "num_base_bdevs": 2, 00:08:08.507 "num_base_bdevs_discovered": 2, 00:08:08.507 "num_base_bdevs_operational": 2, 00:08:08.507 "base_bdevs_list": [ 00:08:08.507 { 00:08:08.507 "name": "BaseBdev1", 00:08:08.507 "uuid": "38d31d0f-532e-5139-a1d9-aa7aeb17b6f2", 00:08:08.507 "is_configured": true, 00:08:08.507 "data_offset": 2048, 00:08:08.507 "data_size": 63488 00:08:08.507 }, 00:08:08.507 { 00:08:08.507 "name": "BaseBdev2", 00:08:08.507 "uuid": "22df21d7-4106-55f2-ae89-146138f91a62", 00:08:08.507 "is_configured": true, 00:08:08.507 "data_offset": 2048, 00:08:08.507 "data_size": 63488 00:08:08.507 } 00:08:08.507 ] 00:08:08.507 }' 00:08:08.507 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.507 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.767 [2024-11-29 07:39:58.548455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.767 [2024-11-29 07:39:58.548494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.767 [2024-11-29 07:39:58.551190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.767 [2024-11-29 07:39:58.551236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.767 [2024-11-29 07:39:58.551271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.767 [2024-11-29 07:39:58.551282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.767 { 00:08:08.767 "results": [ 00:08:08.767 { 00:08:08.767 "job": "raid_bdev1", 00:08:08.767 "core_mask": "0x1", 00:08:08.767 "workload": "randrw", 00:08:08.767 "percentage": 50, 00:08:08.767 "status": "finished", 00:08:08.767 "queue_depth": 1, 00:08:08.767 "io_size": 131072, 00:08:08.767 "runtime": 1.328353, 00:08:08.767 "iops": 16493.356811028392, 00:08:08.767 "mibps": 2061.669601378549, 00:08:08.767 "io_failed": 1, 00:08:08.767 "io_timeout": 0, 00:08:08.767 "avg_latency_us": 83.83014116901415, 00:08:08.767 "min_latency_us": 24.817467248908297, 00:08:08.767 "max_latency_us": 1352.216593886463 00:08:08.767 } 00:08:08.767 ], 00:08:08.767 "core_count": 1 00:08:08.767 } 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61407 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61407 ']' 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61407 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61407 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.767 killing process with pid 61407 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61407' 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61407 00:08:08.767 [2024-11-29 07:39:58.594279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.767 07:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61407 00:08:09.027 [2024-11-29 07:39:58.721384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MgvPJFxKMt 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:09.967 00:08:09.967 real 0m4.159s 00:08:09.967 user 0m4.929s 00:08:09.967 sys 0m0.519s 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.967 07:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.967 ************************************ 00:08:09.967 END TEST raid_write_error_test 00:08:09.967 ************************************ 00:08:09.967 07:39:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:09.967 07:39:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:09.967 07:39:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.967 07:39:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.967 07:39:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.227 ************************************ 00:08:10.227 START TEST raid_state_function_test 00:08:10.227 ************************************ 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61545 00:08:10.227 07:39:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.227 Process raid pid: 61545 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61545' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61545 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61545 ']' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.227 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.227 [2024-11-29 07:40:00.013518] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:10.227 [2024-11-29 07:40:00.013647] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.487 [2024-11-29 07:40:00.179789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.487 [2024-11-29 07:40:00.283263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.746 [2024-11-29 07:40:00.479833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.746 [2024-11-29 07:40:00.479871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.005 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.005 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.005 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.005 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.005 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.005 [2024-11-29 07:40:00.830952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.006 [2024-11-29 07:40:00.831007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.006 [2024-11-29 07:40:00.831017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.006 [2024-11-29 07:40:00.831027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.006 07:40:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.006 "name": "Existed_Raid", 00:08:11.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.006 "strip_size_kb": 64, 00:08:11.006 "state": "configuring", 00:08:11.006 
"raid_level": "concat", 00:08:11.006 "superblock": false, 00:08:11.006 "num_base_bdevs": 2, 00:08:11.006 "num_base_bdevs_discovered": 0, 00:08:11.006 "num_base_bdevs_operational": 2, 00:08:11.006 "base_bdevs_list": [ 00:08:11.006 { 00:08:11.006 "name": "BaseBdev1", 00:08:11.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.006 "is_configured": false, 00:08:11.006 "data_offset": 0, 00:08:11.006 "data_size": 0 00:08:11.006 }, 00:08:11.006 { 00:08:11.006 "name": "BaseBdev2", 00:08:11.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.006 "is_configured": false, 00:08:11.006 "data_offset": 0, 00:08:11.006 "data_size": 0 00:08:11.006 } 00:08:11.006 ] 00:08:11.006 }' 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.006 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 [2024-11-29 07:40:01.290133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.574 [2024-11-29 07:40:01.290173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.574 [2024-11-29 07:40:01.302110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.574 [2024-11-29 07:40:01.302148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.574 [2024-11-29 07:40:01.302173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.574 [2024-11-29 07:40:01.302184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 [2024-11-29 07:40:01.349066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.574 BaseBdev1 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.574 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 [ 00:08:11.574 { 00:08:11.574 "name": "BaseBdev1", 00:08:11.574 "aliases": [ 00:08:11.574 "7491a387-b2fa-4549-9a64-332042d13c9c" 00:08:11.574 ], 00:08:11.574 "product_name": "Malloc disk", 00:08:11.574 "block_size": 512, 00:08:11.574 "num_blocks": 65536, 00:08:11.574 "uuid": "7491a387-b2fa-4549-9a64-332042d13c9c", 00:08:11.574 "assigned_rate_limits": { 00:08:11.574 "rw_ios_per_sec": 0, 00:08:11.574 "rw_mbytes_per_sec": 0, 00:08:11.574 "r_mbytes_per_sec": 0, 00:08:11.574 "w_mbytes_per_sec": 0 00:08:11.574 }, 00:08:11.574 "claimed": true, 00:08:11.574 "claim_type": "exclusive_write", 00:08:11.574 "zoned": false, 00:08:11.574 "supported_io_types": { 00:08:11.574 "read": true, 00:08:11.574 "write": true, 00:08:11.574 "unmap": true, 00:08:11.574 "flush": true, 00:08:11.574 "reset": true, 00:08:11.574 "nvme_admin": false, 00:08:11.574 "nvme_io": false, 00:08:11.574 "nvme_io_md": false, 00:08:11.574 "write_zeroes": true, 00:08:11.574 "zcopy": true, 00:08:11.574 "get_zone_info": false, 00:08:11.574 "zone_management": false, 00:08:11.574 "zone_append": false, 00:08:11.574 "compare": false, 00:08:11.574 "compare_and_write": false, 00:08:11.574 "abort": true, 00:08:11.574 "seek_hole": false, 00:08:11.574 "seek_data": false, 00:08:11.574 "copy": true, 00:08:11.574 "nvme_iov_md": 
false 00:08:11.574 }, 00:08:11.574 "memory_domains": [ 00:08:11.574 { 00:08:11.574 "dma_device_id": "system", 00:08:11.575 "dma_device_type": 1 00:08:11.575 }, 00:08:11.575 { 00:08:11.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.575 "dma_device_type": 2 00:08:11.575 } 00:08:11.575 ], 00:08:11.575 "driver_specific": {} 00:08:11.575 } 00:08:11.575 ] 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.575 
07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.575 "name": "Existed_Raid", 00:08:11.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.575 "strip_size_kb": 64, 00:08:11.575 "state": "configuring", 00:08:11.575 "raid_level": "concat", 00:08:11.575 "superblock": false, 00:08:11.575 "num_base_bdevs": 2, 00:08:11.575 "num_base_bdevs_discovered": 1, 00:08:11.575 "num_base_bdevs_operational": 2, 00:08:11.575 "base_bdevs_list": [ 00:08:11.575 { 00:08:11.575 "name": "BaseBdev1", 00:08:11.575 "uuid": "7491a387-b2fa-4549-9a64-332042d13c9c", 00:08:11.575 "is_configured": true, 00:08:11.575 "data_offset": 0, 00:08:11.575 "data_size": 65536 00:08:11.575 }, 00:08:11.575 { 00:08:11.575 "name": "BaseBdev2", 00:08:11.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.575 "is_configured": false, 00:08:11.575 "data_offset": 0, 00:08:11.575 "data_size": 0 00:08:11.575 } 00:08:11.575 ] 00:08:11.575 }' 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.575 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.142 [2024-11-29 07:40:01.832254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.142 [2024-11-29 07:40:01.832346] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.142 [2024-11-29 07:40:01.844276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.142 [2024-11-29 07:40:01.846146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.142 [2024-11-29 07:40:01.846228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.142 "name": "Existed_Raid", 00:08:12.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.142 "strip_size_kb": 64, 00:08:12.142 "state": "configuring", 00:08:12.142 "raid_level": "concat", 00:08:12.142 "superblock": false, 00:08:12.142 "num_base_bdevs": 2, 00:08:12.142 "num_base_bdevs_discovered": 1, 00:08:12.142 "num_base_bdevs_operational": 2, 00:08:12.142 "base_bdevs_list": [ 00:08:12.142 { 00:08:12.142 "name": "BaseBdev1", 00:08:12.142 "uuid": "7491a387-b2fa-4549-9a64-332042d13c9c", 00:08:12.142 "is_configured": true, 00:08:12.142 "data_offset": 0, 00:08:12.142 "data_size": 65536 00:08:12.142 }, 00:08:12.142 { 00:08:12.142 "name": "BaseBdev2", 00:08:12.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.142 "is_configured": false, 00:08:12.142 "data_offset": 0, 00:08:12.142 "data_size": 0 00:08:12.142 } 
00:08:12.142 ] 00:08:12.142 }' 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.142 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.401 [2024-11-29 07:40:02.324731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.401 [2024-11-29 07:40:02.324843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.401 [2024-11-29 07:40:02.324867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:12.401 [2024-11-29 07:40:02.325207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.401 [2024-11-29 07:40:02.325427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.401 [2024-11-29 07:40:02.325472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:12.401 [2024-11-29 07:40:02.325764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.401 BaseBdev2 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.401 07:40:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.401 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 [ 00:08:12.660 { 00:08:12.660 "name": "BaseBdev2", 00:08:12.660 "aliases": [ 00:08:12.660 "20e99536-8d99-4a73-974d-490466802e09" 00:08:12.660 ], 00:08:12.660 "product_name": "Malloc disk", 00:08:12.660 "block_size": 512, 00:08:12.660 "num_blocks": 65536, 00:08:12.660 "uuid": "20e99536-8d99-4a73-974d-490466802e09", 00:08:12.660 "assigned_rate_limits": { 00:08:12.660 "rw_ios_per_sec": 0, 00:08:12.660 "rw_mbytes_per_sec": 0, 00:08:12.660 "r_mbytes_per_sec": 0, 00:08:12.660 "w_mbytes_per_sec": 0 00:08:12.660 }, 00:08:12.660 "claimed": true, 00:08:12.660 "claim_type": "exclusive_write", 00:08:12.660 "zoned": false, 00:08:12.660 "supported_io_types": { 00:08:12.660 "read": true, 00:08:12.660 "write": true, 00:08:12.660 "unmap": true, 00:08:12.660 "flush": true, 00:08:12.660 "reset": true, 00:08:12.660 "nvme_admin": false, 00:08:12.660 "nvme_io": false, 00:08:12.660 "nvme_io_md": 
false, 00:08:12.660 "write_zeroes": true, 00:08:12.660 "zcopy": true, 00:08:12.660 "get_zone_info": false, 00:08:12.660 "zone_management": false, 00:08:12.660 "zone_append": false, 00:08:12.660 "compare": false, 00:08:12.660 "compare_and_write": false, 00:08:12.660 "abort": true, 00:08:12.660 "seek_hole": false, 00:08:12.660 "seek_data": false, 00:08:12.660 "copy": true, 00:08:12.660 "nvme_iov_md": false 00:08:12.660 }, 00:08:12.660 "memory_domains": [ 00:08:12.660 { 00:08:12.660 "dma_device_id": "system", 00:08:12.660 "dma_device_type": 1 00:08:12.660 }, 00:08:12.660 { 00:08:12.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.660 "dma_device_type": 2 00:08:12.660 } 00:08:12.660 ], 00:08:12.660 "driver_specific": {} 00:08:12.660 } 00:08:12.660 ] 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.660 "name": "Existed_Raid", 00:08:12.660 "uuid": "d4fb6f02-d470-4a18-a4e6-a49eb05a11e5", 00:08:12.660 "strip_size_kb": 64, 00:08:12.660 "state": "online", 00:08:12.660 "raid_level": "concat", 00:08:12.660 "superblock": false, 00:08:12.660 "num_base_bdevs": 2, 00:08:12.660 "num_base_bdevs_discovered": 2, 00:08:12.660 "num_base_bdevs_operational": 2, 00:08:12.660 "base_bdevs_list": [ 00:08:12.660 { 00:08:12.660 "name": "BaseBdev1", 00:08:12.660 "uuid": "7491a387-b2fa-4549-9a64-332042d13c9c", 00:08:12.660 "is_configured": true, 00:08:12.660 "data_offset": 0, 00:08:12.660 "data_size": 65536 00:08:12.660 }, 00:08:12.660 { 00:08:12.660 "name": "BaseBdev2", 00:08:12.660 "uuid": "20e99536-8d99-4a73-974d-490466802e09", 00:08:12.660 "is_configured": true, 00:08:12.660 "data_offset": 0, 00:08:12.660 "data_size": 65536 00:08:12.660 } 00:08:12.660 ] 00:08:12.660 }' 00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:12.660 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.918 [2024-11-29 07:40:02.832198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.918 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.178 "name": "Existed_Raid", 00:08:13.178 "aliases": [ 00:08:13.178 "d4fb6f02-d470-4a18-a4e6-a49eb05a11e5" 00:08:13.178 ], 00:08:13.178 "product_name": "Raid Volume", 00:08:13.178 "block_size": 512, 00:08:13.178 "num_blocks": 131072, 00:08:13.178 "uuid": "d4fb6f02-d470-4a18-a4e6-a49eb05a11e5", 00:08:13.178 "assigned_rate_limits": { 00:08:13.178 "rw_ios_per_sec": 0, 00:08:13.178 "rw_mbytes_per_sec": 0, 00:08:13.178 "r_mbytes_per_sec": 
0, 00:08:13.178 "w_mbytes_per_sec": 0 00:08:13.178 }, 00:08:13.178 "claimed": false, 00:08:13.178 "zoned": false, 00:08:13.178 "supported_io_types": { 00:08:13.178 "read": true, 00:08:13.178 "write": true, 00:08:13.178 "unmap": true, 00:08:13.178 "flush": true, 00:08:13.178 "reset": true, 00:08:13.178 "nvme_admin": false, 00:08:13.178 "nvme_io": false, 00:08:13.178 "nvme_io_md": false, 00:08:13.178 "write_zeroes": true, 00:08:13.178 "zcopy": false, 00:08:13.178 "get_zone_info": false, 00:08:13.178 "zone_management": false, 00:08:13.178 "zone_append": false, 00:08:13.178 "compare": false, 00:08:13.178 "compare_and_write": false, 00:08:13.178 "abort": false, 00:08:13.178 "seek_hole": false, 00:08:13.178 "seek_data": false, 00:08:13.178 "copy": false, 00:08:13.178 "nvme_iov_md": false 00:08:13.178 }, 00:08:13.178 "memory_domains": [ 00:08:13.178 { 00:08:13.178 "dma_device_id": "system", 00:08:13.178 "dma_device_type": 1 00:08:13.178 }, 00:08:13.178 { 00:08:13.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.178 "dma_device_type": 2 00:08:13.178 }, 00:08:13.178 { 00:08:13.178 "dma_device_id": "system", 00:08:13.178 "dma_device_type": 1 00:08:13.178 }, 00:08:13.178 { 00:08:13.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.178 "dma_device_type": 2 00:08:13.178 } 00:08:13.178 ], 00:08:13.178 "driver_specific": { 00:08:13.178 "raid": { 00:08:13.178 "uuid": "d4fb6f02-d470-4a18-a4e6-a49eb05a11e5", 00:08:13.178 "strip_size_kb": 64, 00:08:13.178 "state": "online", 00:08:13.178 "raid_level": "concat", 00:08:13.178 "superblock": false, 00:08:13.178 "num_base_bdevs": 2, 00:08:13.178 "num_base_bdevs_discovered": 2, 00:08:13.178 "num_base_bdevs_operational": 2, 00:08:13.178 "base_bdevs_list": [ 00:08:13.178 { 00:08:13.178 "name": "BaseBdev1", 00:08:13.178 "uuid": "7491a387-b2fa-4549-9a64-332042d13c9c", 00:08:13.178 "is_configured": true, 00:08:13.178 "data_offset": 0, 00:08:13.178 "data_size": 65536 00:08:13.178 }, 00:08:13.178 { 00:08:13.178 "name": "BaseBdev2", 
00:08:13.178 "uuid": "20e99536-8d99-4a73-974d-490466802e09", 00:08:13.178 "is_configured": true, 00:08:13.178 "data_offset": 0, 00:08:13.178 "data_size": 65536 00:08:13.178 } 00:08:13.178 ] 00:08:13.178 } 00:08:13.178 } 00:08:13.178 }' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.178 BaseBdev2' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.178 07:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.178 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.178 [2024-11-29 07:40:03.067537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.178 [2024-11-29 07:40:03.067569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.178 [2024-11-29 07:40:03.067617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.442 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.443 "name": "Existed_Raid", 00:08:13.443 "uuid": "d4fb6f02-d470-4a18-a4e6-a49eb05a11e5", 00:08:13.443 "strip_size_kb": 64, 00:08:13.443 
"state": "offline", 00:08:13.443 "raid_level": "concat", 00:08:13.443 "superblock": false, 00:08:13.443 "num_base_bdevs": 2, 00:08:13.443 "num_base_bdevs_discovered": 1, 00:08:13.443 "num_base_bdevs_operational": 1, 00:08:13.443 "base_bdevs_list": [ 00:08:13.443 { 00:08:13.443 "name": null, 00:08:13.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.443 "is_configured": false, 00:08:13.443 "data_offset": 0, 00:08:13.443 "data_size": 65536 00:08:13.443 }, 00:08:13.443 { 00:08:13.443 "name": "BaseBdev2", 00:08:13.443 "uuid": "20e99536-8d99-4a73-974d-490466802e09", 00:08:13.443 "is_configured": true, 00:08:13.443 "data_offset": 0, 00:08:13.443 "data_size": 65536 00:08:13.443 } 00:08:13.443 ] 00:08:13.443 }' 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.443 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.705 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.705 [2024-11-29 07:40:03.645052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.705 [2024-11-29 07:40:03.645156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61545 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61545 ']' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61545 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61545 00:08:13.963 killing process with pid 61545 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61545' 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61545 00:08:13.963 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61545 00:08:13.963 [2024-11-29 07:40:03.814502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.963 [2024-11-29 07:40:03.830888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:15.343 00:08:15.343 real 0m4.993s 00:08:15.343 user 0m7.279s 00:08:15.343 sys 0m0.754s 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.343 ************************************ 00:08:15.343 END TEST raid_state_function_test 00:08:15.343 ************************************ 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.343 07:40:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:15.343 07:40:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:15.343 07:40:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.343 07:40:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.343 ************************************ 00:08:15.343 START TEST raid_state_function_test_sb 00:08:15.343 ************************************ 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61798 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61798' 00:08:15.343 Process raid pid: 61798 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61798 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61798 ']' 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.343 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.343 [2024-11-29 07:40:05.073500] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:15.343 [2024-11-29 07:40:05.073756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.343 [2024-11-29 07:40:05.247727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.602 [2024-11-29 07:40:05.352577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.861 [2024-11-29 07:40:05.550001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.861 [2024-11-29 07:40:05.550147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.120 [2024-11-29 07:40:05.893933] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:16.120 [2024-11-29 07:40:05.894049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.120 [2024-11-29 07:40:05.894080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.120 [2024-11-29 07:40:05.894103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.120 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.120 "name": "Existed_Raid", 00:08:16.121 "uuid": "5a7fea7c-95b6-4f1a-9110-b1ed71044d18", 00:08:16.121 "strip_size_kb": 64, 00:08:16.121 "state": "configuring", 00:08:16.121 "raid_level": "concat", 00:08:16.121 "superblock": true, 00:08:16.121 "num_base_bdevs": 2, 00:08:16.121 "num_base_bdevs_discovered": 0, 00:08:16.121 "num_base_bdevs_operational": 2, 00:08:16.121 "base_bdevs_list": [ 00:08:16.121 { 00:08:16.121 "name": "BaseBdev1", 00:08:16.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.121 "is_configured": false, 00:08:16.121 "data_offset": 0, 00:08:16.121 "data_size": 0 00:08:16.121 }, 00:08:16.121 { 00:08:16.121 "name": "BaseBdev2", 00:08:16.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.121 "is_configured": false, 00:08:16.121 "data_offset": 0, 00:08:16.121 "data_size": 0 00:08:16.121 } 00:08:16.121 ] 00:08:16.121 }' 00:08:16.121 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.121 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.381 [2024-11-29 07:40:06.305155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.381 
[2024-11-29 07:40:06.305228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.381 [2024-11-29 07:40:06.317139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.381 [2024-11-29 07:40:06.317211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.381 [2024-11-29 07:40:06.317241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.381 [2024-11-29 07:40:06.317282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.381 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 [2024-11-29 07:40:06.362659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.641 BaseBdev1 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 [ 00:08:16.641 { 00:08:16.641 "name": "BaseBdev1", 00:08:16.641 "aliases": [ 00:08:16.641 "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49" 00:08:16.641 ], 00:08:16.641 "product_name": "Malloc disk", 00:08:16.641 "block_size": 512, 00:08:16.641 "num_blocks": 65536, 00:08:16.641 "uuid": "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49", 00:08:16.641 "assigned_rate_limits": { 00:08:16.641 "rw_ios_per_sec": 0, 00:08:16.641 "rw_mbytes_per_sec": 0, 00:08:16.641 "r_mbytes_per_sec": 0, 00:08:16.641 "w_mbytes_per_sec": 0 00:08:16.641 }, 00:08:16.641 "claimed": true, 00:08:16.641 "claim_type": 
"exclusive_write", 00:08:16.641 "zoned": false, 00:08:16.641 "supported_io_types": { 00:08:16.641 "read": true, 00:08:16.641 "write": true, 00:08:16.641 "unmap": true, 00:08:16.641 "flush": true, 00:08:16.641 "reset": true, 00:08:16.641 "nvme_admin": false, 00:08:16.641 "nvme_io": false, 00:08:16.641 "nvme_io_md": false, 00:08:16.641 "write_zeroes": true, 00:08:16.641 "zcopy": true, 00:08:16.641 "get_zone_info": false, 00:08:16.641 "zone_management": false, 00:08:16.641 "zone_append": false, 00:08:16.641 "compare": false, 00:08:16.641 "compare_and_write": false, 00:08:16.641 "abort": true, 00:08:16.641 "seek_hole": false, 00:08:16.641 "seek_data": false, 00:08:16.641 "copy": true, 00:08:16.641 "nvme_iov_md": false 00:08:16.641 }, 00:08:16.641 "memory_domains": [ 00:08:16.641 { 00:08:16.641 "dma_device_id": "system", 00:08:16.641 "dma_device_type": 1 00:08:16.641 }, 00:08:16.641 { 00:08:16.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.641 "dma_device_type": 2 00:08:16.641 } 00:08:16.641 ], 00:08:16.641 "driver_specific": {} 00:08:16.641 } 00:08:16.641 ] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.641 "name": "Existed_Raid", 00:08:16.641 "uuid": "bc00691a-c717-4e54-9514-f26f591a9253", 00:08:16.641 "strip_size_kb": 64, 00:08:16.641 "state": "configuring", 00:08:16.641 "raid_level": "concat", 00:08:16.641 "superblock": true, 00:08:16.641 "num_base_bdevs": 2, 00:08:16.641 "num_base_bdevs_discovered": 1, 00:08:16.641 "num_base_bdevs_operational": 2, 00:08:16.641 "base_bdevs_list": [ 00:08:16.641 { 00:08:16.641 "name": "BaseBdev1", 00:08:16.641 "uuid": "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49", 00:08:16.641 "is_configured": true, 00:08:16.641 "data_offset": 2048, 00:08:16.641 "data_size": 63488 00:08:16.641 }, 00:08:16.641 { 00:08:16.641 "name": "BaseBdev2", 00:08:16.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.641 "is_configured": false, 00:08:16.641 
"data_offset": 0, 00:08:16.641 "data_size": 0 00:08:16.641 } 00:08:16.641 ] 00:08:16.641 }' 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.641 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.901 [2024-11-29 07:40:06.833875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.901 [2024-11-29 07:40:06.833956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.901 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.161 [2024-11-29 07:40:06.845904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.161 [2024-11-29 07:40:06.847800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.161 [2024-11-29 07:40:06.847889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.161 "name": "Existed_Raid", 00:08:17.161 "uuid": "b8fe0572-ebe8-4ee0-a893-7f45eeab6003", 00:08:17.161 "strip_size_kb": 64, 00:08:17.161 "state": "configuring", 00:08:17.161 "raid_level": "concat", 00:08:17.161 "superblock": true, 00:08:17.161 "num_base_bdevs": 2, 00:08:17.161 "num_base_bdevs_discovered": 1, 00:08:17.161 "num_base_bdevs_operational": 2, 00:08:17.161 "base_bdevs_list": [ 00:08:17.161 { 00:08:17.161 "name": "BaseBdev1", 00:08:17.161 "uuid": "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49", 00:08:17.161 "is_configured": true, 00:08:17.161 "data_offset": 2048, 00:08:17.161 "data_size": 63488 00:08:17.161 }, 00:08:17.161 { 00:08:17.161 "name": "BaseBdev2", 00:08:17.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.161 "is_configured": false, 00:08:17.161 "data_offset": 0, 00:08:17.161 "data_size": 0 00:08:17.161 } 00:08:17.161 ] 00:08:17.161 }' 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.161 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.421 [2024-11-29 07:40:07.330349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.421 [2024-11-29 07:40:07.330710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.421 [2024-11-29 07:40:07.330764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:17.421 [2024-11-29 07:40:07.331035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 
00:08:17.421 BaseBdev2 00:08:17.421 [2024-11-29 07:40:07.331238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.421 [2024-11-29 07:40:07.331255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.421 [2024-11-29 07:40:07.331415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.421 07:40:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.421 [ 00:08:17.421 { 00:08:17.421 "name": "BaseBdev2", 00:08:17.421 "aliases": [ 00:08:17.421 "880e92e9-71e2-4f04-9c07-d3242512c45b" 00:08:17.421 ], 00:08:17.421 "product_name": "Malloc disk", 00:08:17.421 "block_size": 512, 00:08:17.421 "num_blocks": 65536, 00:08:17.421 "uuid": "880e92e9-71e2-4f04-9c07-d3242512c45b", 00:08:17.421 "assigned_rate_limits": { 00:08:17.421 "rw_ios_per_sec": 0, 00:08:17.421 "rw_mbytes_per_sec": 0, 00:08:17.421 "r_mbytes_per_sec": 0, 00:08:17.421 "w_mbytes_per_sec": 0 00:08:17.421 }, 00:08:17.421 "claimed": true, 00:08:17.421 "claim_type": "exclusive_write", 00:08:17.421 "zoned": false, 00:08:17.421 "supported_io_types": { 00:08:17.421 "read": true, 00:08:17.421 "write": true, 00:08:17.421 "unmap": true, 00:08:17.421 "flush": true, 00:08:17.421 "reset": true, 00:08:17.421 "nvme_admin": false, 00:08:17.421 "nvme_io": false, 00:08:17.421 "nvme_io_md": false, 00:08:17.421 "write_zeroes": true, 00:08:17.421 "zcopy": true, 00:08:17.681 "get_zone_info": false, 00:08:17.681 "zone_management": false, 00:08:17.681 "zone_append": false, 00:08:17.681 "compare": false, 00:08:17.681 "compare_and_write": false, 00:08:17.681 "abort": true, 00:08:17.681 "seek_hole": false, 00:08:17.681 "seek_data": false, 00:08:17.681 "copy": true, 00:08:17.681 "nvme_iov_md": false 00:08:17.681 }, 00:08:17.681 "memory_domains": [ 00:08:17.681 { 00:08:17.681 "dma_device_id": "system", 00:08:17.681 "dma_device_type": 1 00:08:17.681 }, 00:08:17.681 { 00:08:17.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.681 "dma_device_type": 2 00:08:17.681 } 00:08:17.681 ], 00:08:17.681 "driver_specific": {} 00:08:17.681 } 00:08:17.681 ] 00:08:17.681 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.681 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.681 07:40:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.681 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.681 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.682 "name": "Existed_Raid", 00:08:17.682 "uuid": "b8fe0572-ebe8-4ee0-a893-7f45eeab6003", 00:08:17.682 "strip_size_kb": 64, 00:08:17.682 "state": "online", 00:08:17.682 "raid_level": "concat", 00:08:17.682 "superblock": true, 00:08:17.682 "num_base_bdevs": 2, 00:08:17.682 "num_base_bdevs_discovered": 2, 00:08:17.682 "num_base_bdevs_operational": 2, 00:08:17.682 "base_bdevs_list": [ 00:08:17.682 { 00:08:17.682 "name": "BaseBdev1", 00:08:17.682 "uuid": "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49", 00:08:17.682 "is_configured": true, 00:08:17.682 "data_offset": 2048, 00:08:17.682 "data_size": 63488 00:08:17.682 }, 00:08:17.682 { 00:08:17.682 "name": "BaseBdev2", 00:08:17.682 "uuid": "880e92e9-71e2-4f04-9c07-d3242512c45b", 00:08:17.682 "is_configured": true, 00:08:17.682 "data_offset": 2048, 00:08:17.682 "data_size": 63488 00:08:17.682 } 00:08:17.682 ] 00:08:17.682 }' 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.682 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.941 07:40:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.941 [2024-11-29 07:40:07.773863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.941 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.941 "name": "Existed_Raid", 00:08:17.941 "aliases": [ 00:08:17.941 "b8fe0572-ebe8-4ee0-a893-7f45eeab6003" 00:08:17.941 ], 00:08:17.941 "product_name": "Raid Volume", 00:08:17.941 "block_size": 512, 00:08:17.941 "num_blocks": 126976, 00:08:17.941 "uuid": "b8fe0572-ebe8-4ee0-a893-7f45eeab6003", 00:08:17.941 "assigned_rate_limits": { 00:08:17.941 "rw_ios_per_sec": 0, 00:08:17.941 "rw_mbytes_per_sec": 0, 00:08:17.941 "r_mbytes_per_sec": 0, 00:08:17.941 "w_mbytes_per_sec": 0 00:08:17.941 }, 00:08:17.941 "claimed": false, 00:08:17.941 "zoned": false, 00:08:17.941 "supported_io_types": { 00:08:17.941 "read": true, 00:08:17.941 "write": true, 00:08:17.941 "unmap": true, 00:08:17.941 "flush": true, 00:08:17.941 "reset": true, 00:08:17.941 "nvme_admin": false, 00:08:17.942 "nvme_io": false, 00:08:17.942 "nvme_io_md": false, 00:08:17.942 "write_zeroes": true, 00:08:17.942 "zcopy": false, 00:08:17.942 "get_zone_info": false, 00:08:17.942 "zone_management": false, 00:08:17.942 "zone_append": false, 00:08:17.942 "compare": false, 00:08:17.942 "compare_and_write": false, 00:08:17.942 "abort": false, 00:08:17.942 "seek_hole": false, 00:08:17.942 "seek_data": false, 00:08:17.942 "copy": false, 00:08:17.942 "nvme_iov_md": false 00:08:17.942 }, 00:08:17.942 "memory_domains": [ 00:08:17.942 { 00:08:17.942 "dma_device_id": "system", 00:08:17.942 
"dma_device_type": 1 00:08:17.942 }, 00:08:17.942 { 00:08:17.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.942 "dma_device_type": 2 00:08:17.942 }, 00:08:17.942 { 00:08:17.942 "dma_device_id": "system", 00:08:17.942 "dma_device_type": 1 00:08:17.942 }, 00:08:17.942 { 00:08:17.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.942 "dma_device_type": 2 00:08:17.942 } 00:08:17.942 ], 00:08:17.942 "driver_specific": { 00:08:17.942 "raid": { 00:08:17.942 "uuid": "b8fe0572-ebe8-4ee0-a893-7f45eeab6003", 00:08:17.942 "strip_size_kb": 64, 00:08:17.942 "state": "online", 00:08:17.942 "raid_level": "concat", 00:08:17.942 "superblock": true, 00:08:17.942 "num_base_bdevs": 2, 00:08:17.942 "num_base_bdevs_discovered": 2, 00:08:17.942 "num_base_bdevs_operational": 2, 00:08:17.942 "base_bdevs_list": [ 00:08:17.942 { 00:08:17.942 "name": "BaseBdev1", 00:08:17.942 "uuid": "d81cbd2f-009e-4699-8a47-e1cdbd4b7a49", 00:08:17.942 "is_configured": true, 00:08:17.942 "data_offset": 2048, 00:08:17.942 "data_size": 63488 00:08:17.942 }, 00:08:17.942 { 00:08:17.942 "name": "BaseBdev2", 00:08:17.942 "uuid": "880e92e9-71e2-4f04-9c07-d3242512c45b", 00:08:17.942 "is_configured": true, 00:08:17.942 "data_offset": 2048, 00:08:17.942 "data_size": 63488 00:08:17.942 } 00:08:17.942 ] 00:08:17.942 } 00:08:17.942 } 00:08:17.942 }' 00:08:17.942 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.942 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.942 BaseBdev2' 00:08:17.942 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.202 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.202 07:40:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.202 [2024-11-29 07:40:08.005268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.202 [2024-11-29 07:40:08.005337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.202 [2024-11-29 07:40:08.005404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.202 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.462 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.462 "name": "Existed_Raid", 00:08:18.462 "uuid": "b8fe0572-ebe8-4ee0-a893-7f45eeab6003", 00:08:18.462 "strip_size_kb": 64, 00:08:18.462 "state": "offline", 00:08:18.462 "raid_level": "concat", 00:08:18.462 "superblock": true, 00:08:18.462 "num_base_bdevs": 2, 00:08:18.462 "num_base_bdevs_discovered": 1, 00:08:18.462 "num_base_bdevs_operational": 1, 00:08:18.462 "base_bdevs_list": [ 00:08:18.462 { 00:08:18.462 "name": null, 00:08:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.462 "is_configured": false, 00:08:18.462 "data_offset": 0, 00:08:18.462 "data_size": 63488 00:08:18.462 }, 00:08:18.462 { 00:08:18.462 "name": "BaseBdev2", 00:08:18.462 "uuid": "880e92e9-71e2-4f04-9c07-d3242512c45b", 00:08:18.462 "is_configured": true, 00:08:18.462 "data_offset": 2048, 00:08:18.462 "data_size": 63488 00:08:18.462 } 00:08:18.462 ] 00:08:18.462 }' 00:08:18.462 07:40:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.462 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.722 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.722 [2024-11-29 07:40:08.581808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.722 [2024-11-29 07:40:08.581915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61798 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61798 ']' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61798 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61798 00:08:18.982 killing process with pid 61798 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61798' 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61798 00:08:18.982 [2024-11-29 07:40:08.773055] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.982 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61798 00:08:18.982 [2024-11-29 07:40:08.789624] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.920 07:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:19.920 00:08:19.920 real 0m4.877s 00:08:19.920 user 0m7.069s 00:08:19.920 sys 0m0.743s 00:08:19.920 07:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.920 07:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.920 ************************************ 00:08:19.920 END TEST raid_state_function_test_sb 00:08:19.920 ************************************ 00:08:20.180 07:40:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:20.180 07:40:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.180 07:40:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.180 07:40:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 ************************************ 00:08:20.180 START TEST raid_superblock_test 00:08:20.180 ************************************ 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:20.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62039 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62039 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62039 ']' 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.180 07:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.180 [2024-11-29 07:40:10.009684] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:20.180 [2024-11-29 07:40:10.009897] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:08:20.440 [2024-11-29 07:40:10.181042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.440 [2024-11-29 07:40:10.292470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.700 [2024-11-29 07:40:10.484554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.700 [2024-11-29 07:40:10.484665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:20.960 
07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.960 malloc1 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.960 [2024-11-29 07:40:10.880590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.960 [2024-11-29 07:40:10.880704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.960 [2024-11-29 07:40:10.880758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.960 [2024-11-29 07:40:10.880797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.960 [2024-11-29 07:40:10.882839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.960 [2024-11-29 07:40:10.882908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.960 pt1 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.960 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.961 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.961 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.961 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:20.961 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.961 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.221 malloc2 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.221 [2024-11-29 07:40:10.933981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.221 [2024-11-29 07:40:10.934072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.221 [2024-11-29 07:40:10.934141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:21.221 [2024-11-29 07:40:10.934173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.221 [2024-11-29 07:40:10.936155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.221 [2024-11-29 07:40:10.936219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.221 
pt2 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.221 [2024-11-29 07:40:10.946015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.221 [2024-11-29 07:40:10.947780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.221 [2024-11-29 07:40:10.947991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:21.221 [2024-11-29 07:40:10.948041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:21.221 [2024-11-29 07:40:10.948297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.221 [2024-11-29 07:40:10.948471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:21.221 [2024-11-29 07:40:10.948512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:21.221 [2024-11-29 07:40:10.948706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.221 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.221 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.221 "name": "raid_bdev1", 00:08:21.221 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:21.221 "strip_size_kb": 64, 00:08:21.221 "state": "online", 00:08:21.221 "raid_level": "concat", 00:08:21.221 "superblock": true, 00:08:21.221 "num_base_bdevs": 2, 00:08:21.221 "num_base_bdevs_discovered": 2, 00:08:21.221 "num_base_bdevs_operational": 2, 00:08:21.221 "base_bdevs_list": [ 00:08:21.221 { 00:08:21.221 "name": "pt1", 
00:08:21.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.221 "is_configured": true, 00:08:21.221 "data_offset": 2048, 00:08:21.221 "data_size": 63488 00:08:21.221 }, 00:08:21.221 { 00:08:21.221 "name": "pt2", 00:08:21.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.221 "is_configured": true, 00:08:21.221 "data_offset": 2048, 00:08:21.221 "data_size": 63488 00:08:21.221 } 00:08:21.221 ] 00:08:21.221 }' 00:08:21.221 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.221 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.490 [2024-11-29 07:40:11.381522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.490 "name": "raid_bdev1", 00:08:21.490 "aliases": [ 00:08:21.490 "9d1178b0-521b-40c7-b058-cfffd4bc2f3e" 00:08:21.490 ], 00:08:21.490 "product_name": "Raid Volume", 00:08:21.490 "block_size": 512, 00:08:21.490 "num_blocks": 126976, 00:08:21.490 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:21.490 "assigned_rate_limits": { 00:08:21.490 "rw_ios_per_sec": 0, 00:08:21.490 "rw_mbytes_per_sec": 0, 00:08:21.490 "r_mbytes_per_sec": 0, 00:08:21.490 "w_mbytes_per_sec": 0 00:08:21.490 }, 00:08:21.490 "claimed": false, 00:08:21.490 "zoned": false, 00:08:21.490 "supported_io_types": { 00:08:21.490 "read": true, 00:08:21.490 "write": true, 00:08:21.490 "unmap": true, 00:08:21.490 "flush": true, 00:08:21.490 "reset": true, 00:08:21.490 "nvme_admin": false, 00:08:21.490 "nvme_io": false, 00:08:21.490 "nvme_io_md": false, 00:08:21.490 "write_zeroes": true, 00:08:21.490 "zcopy": false, 00:08:21.490 "get_zone_info": false, 00:08:21.490 "zone_management": false, 00:08:21.490 "zone_append": false, 00:08:21.490 "compare": false, 00:08:21.490 "compare_and_write": false, 00:08:21.490 "abort": false, 00:08:21.490 "seek_hole": false, 00:08:21.490 "seek_data": false, 00:08:21.490 "copy": false, 00:08:21.490 "nvme_iov_md": false 00:08:21.490 }, 00:08:21.490 "memory_domains": [ 00:08:21.490 { 00:08:21.490 "dma_device_id": "system", 00:08:21.490 "dma_device_type": 1 00:08:21.490 }, 00:08:21.490 { 00:08:21.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.490 "dma_device_type": 2 00:08:21.490 }, 00:08:21.490 { 00:08:21.490 "dma_device_id": "system", 00:08:21.490 "dma_device_type": 1 00:08:21.490 }, 00:08:21.490 { 00:08:21.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.490 "dma_device_type": 2 00:08:21.490 } 00:08:21.490 ], 00:08:21.490 "driver_specific": { 00:08:21.490 "raid": { 00:08:21.490 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:21.490 "strip_size_kb": 64, 00:08:21.490 "state": "online", 00:08:21.490 
"raid_level": "concat", 00:08:21.490 "superblock": true, 00:08:21.490 "num_base_bdevs": 2, 00:08:21.490 "num_base_bdevs_discovered": 2, 00:08:21.490 "num_base_bdevs_operational": 2, 00:08:21.490 "base_bdevs_list": [ 00:08:21.490 { 00:08:21.490 "name": "pt1", 00:08:21.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.490 "is_configured": true, 00:08:21.490 "data_offset": 2048, 00:08:21.490 "data_size": 63488 00:08:21.490 }, 00:08:21.490 { 00:08:21.490 "name": "pt2", 00:08:21.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.490 "is_configured": true, 00:08:21.490 "data_offset": 2048, 00:08:21.490 "data_size": 63488 00:08:21.490 } 00:08:21.490 ] 00:08:21.490 } 00:08:21.490 } 00:08:21.490 }' 00:08:21.490 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.764 pt2' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.764 07:40:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:21.764 [2024-11-29 07:40:11.609075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9d1178b0-521b-40c7-b058-cfffd4bc2f3e 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9d1178b0-521b-40c7-b058-cfffd4bc2f3e ']' 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.764 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.764 [2024-11-29 07:40:11.656708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.764 [2024-11-29 07:40:11.656730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.764 [2024-11-29 07:40:11.656802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.764 [2024-11-29 07:40:11.656852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.764 [2024-11-29 07:40:11.656863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.765 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.024 07:40:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.024 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 [2024-11-29 07:40:11.788554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:22.024 [2024-11-29 07:40:11.790479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:22.024 [2024-11-29 07:40:11.790588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:22.024 [2024-11-29 07:40:11.790697] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:22.024 [2024-11-29 07:40:11.790779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.024 [2024-11-29 07:40:11.790813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:22.024 request: 00:08:22.024 { 00:08:22.024 "name": "raid_bdev1", 00:08:22.024 "raid_level": "concat", 00:08:22.024 "base_bdevs": [ 00:08:22.024 "malloc1", 00:08:22.024 "malloc2" 00:08:22.024 ], 00:08:22.024 "strip_size_kb": 64, 
00:08:22.024 "superblock": false, 00:08:22.024 "method": "bdev_raid_create", 00:08:22.025 "req_id": 1 00:08:22.025 } 00:08:22.025 Got JSON-RPC error response 00:08:22.025 response: 00:08:22.025 { 00:08:22.025 "code": -17, 00:08:22.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:22.025 } 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 [2024-11-29 07:40:11.852452] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:22.025 [2024-11-29 07:40:11.852569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.025 [2024-11-29 07:40:11.852604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:22.025 [2024-11-29 07:40:11.852645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.025 [2024-11-29 07:40:11.854833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.025 [2024-11-29 07:40:11.854904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.025 [2024-11-29 07:40:11.855042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:22.025 [2024-11-29 07:40:11.855145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.025 pt1 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.025 "name": "raid_bdev1", 00:08:22.025 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:22.025 "strip_size_kb": 64, 00:08:22.025 "state": "configuring", 00:08:22.025 "raid_level": "concat", 00:08:22.025 "superblock": true, 00:08:22.025 "num_base_bdevs": 2, 00:08:22.025 "num_base_bdevs_discovered": 1, 00:08:22.025 "num_base_bdevs_operational": 2, 00:08:22.025 "base_bdevs_list": [ 00:08:22.025 { 00:08:22.025 "name": "pt1", 00:08:22.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.025 "is_configured": true, 00:08:22.025 "data_offset": 2048, 00:08:22.025 "data_size": 63488 00:08:22.025 }, 00:08:22.025 { 00:08:22.025 "name": null, 00:08:22.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.025 "is_configured": false, 00:08:22.025 "data_offset": 2048, 00:08:22.025 "data_size": 63488 00:08:22.025 } 00:08:22.025 ] 00:08:22.025 }' 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.025 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.594 [2024-11-29 07:40:12.315672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.594 [2024-11-29 07:40:12.315815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.594 [2024-11-29 07:40:12.315855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:22.594 [2024-11-29 07:40:12.315886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.594 [2024-11-29 07:40:12.316398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.594 [2024-11-29 07:40:12.316464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.594 [2024-11-29 07:40:12.316584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.594 [2024-11-29 07:40:12.316642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.594 [2024-11-29 07:40:12.316792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.594 [2024-11-29 07:40:12.316833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.594 [2024-11-29 07:40:12.317110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.594 [2024-11-29 07:40:12.317305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:22.594 [2024-11-29 07:40:12.317343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.594 [2024-11-29 07:40:12.317525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.594 pt2 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.594 "name": "raid_bdev1", 00:08:22.594 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:22.594 "strip_size_kb": 64, 00:08:22.594 "state": "online", 00:08:22.594 "raid_level": "concat", 00:08:22.594 "superblock": true, 00:08:22.594 "num_base_bdevs": 2, 00:08:22.594 "num_base_bdevs_discovered": 2, 00:08:22.594 "num_base_bdevs_operational": 2, 00:08:22.594 "base_bdevs_list": [ 00:08:22.594 { 00:08:22.594 "name": "pt1", 00:08:22.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.594 "is_configured": true, 00:08:22.594 "data_offset": 2048, 00:08:22.594 "data_size": 63488 00:08:22.594 }, 00:08:22.594 { 00:08:22.594 "name": "pt2", 00:08:22.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.594 "is_configured": true, 00:08:22.594 "data_offset": 2048, 00:08:22.594 "data_size": 63488 00:08:22.594 } 00:08:22.594 ] 00:08:22.594 }' 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.594 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.854 07:40:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.854 [2024-11-29 07:40:12.735209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.854 "name": "raid_bdev1", 00:08:22.854 "aliases": [ 00:08:22.854 "9d1178b0-521b-40c7-b058-cfffd4bc2f3e" 00:08:22.854 ], 00:08:22.854 "product_name": "Raid Volume", 00:08:22.854 "block_size": 512, 00:08:22.854 "num_blocks": 126976, 00:08:22.854 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:22.854 "assigned_rate_limits": { 00:08:22.854 "rw_ios_per_sec": 0, 00:08:22.854 "rw_mbytes_per_sec": 0, 00:08:22.854 "r_mbytes_per_sec": 0, 00:08:22.854 "w_mbytes_per_sec": 0 00:08:22.854 }, 00:08:22.854 "claimed": false, 00:08:22.854 "zoned": false, 00:08:22.854 "supported_io_types": { 00:08:22.854 "read": true, 00:08:22.854 "write": true, 00:08:22.854 "unmap": true, 00:08:22.854 "flush": true, 00:08:22.854 "reset": true, 00:08:22.854 "nvme_admin": false, 00:08:22.854 "nvme_io": false, 00:08:22.854 "nvme_io_md": false, 00:08:22.854 "write_zeroes": true, 00:08:22.854 "zcopy": false, 00:08:22.854 "get_zone_info": false, 00:08:22.854 "zone_management": false, 00:08:22.854 "zone_append": false, 00:08:22.854 "compare": false, 00:08:22.854 "compare_and_write": false, 00:08:22.854 "abort": false, 00:08:22.854 "seek_hole": false, 00:08:22.854 
"seek_data": false, 00:08:22.854 "copy": false, 00:08:22.854 "nvme_iov_md": false 00:08:22.854 }, 00:08:22.854 "memory_domains": [ 00:08:22.854 { 00:08:22.854 "dma_device_id": "system", 00:08:22.854 "dma_device_type": 1 00:08:22.854 }, 00:08:22.854 { 00:08:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.854 "dma_device_type": 2 00:08:22.854 }, 00:08:22.854 { 00:08:22.854 "dma_device_id": "system", 00:08:22.854 "dma_device_type": 1 00:08:22.854 }, 00:08:22.854 { 00:08:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.854 "dma_device_type": 2 00:08:22.854 } 00:08:22.854 ], 00:08:22.854 "driver_specific": { 00:08:22.854 "raid": { 00:08:22.854 "uuid": "9d1178b0-521b-40c7-b058-cfffd4bc2f3e", 00:08:22.854 "strip_size_kb": 64, 00:08:22.854 "state": "online", 00:08:22.854 "raid_level": "concat", 00:08:22.854 "superblock": true, 00:08:22.854 "num_base_bdevs": 2, 00:08:22.854 "num_base_bdevs_discovered": 2, 00:08:22.854 "num_base_bdevs_operational": 2, 00:08:22.854 "base_bdevs_list": [ 00:08:22.854 { 00:08:22.854 "name": "pt1", 00:08:22.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.854 "is_configured": true, 00:08:22.854 "data_offset": 2048, 00:08:22.854 "data_size": 63488 00:08:22.854 }, 00:08:22.854 { 00:08:22.854 "name": "pt2", 00:08:22.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.854 "is_configured": true, 00:08:22.854 "data_offset": 2048, 00:08:22.854 "data_size": 63488 00:08:22.854 } 00:08:22.854 ] 00:08:22.854 } 00:08:22.854 } 00:08:22.854 }' 00:08:22.854 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:23.115 pt2' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.115 07:40:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.115 [2024-11-29 07:40:12.942778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9d1178b0-521b-40c7-b058-cfffd4bc2f3e '!=' 9d1178b0-521b-40c7-b058-cfffd4bc2f3e ']' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62039 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62039 ']' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62039 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.115 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62039 00:08:23.115 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.115 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.115 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 62039' 00:08:23.115 killing process with pid 62039 00:08:23.115 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62039 00:08:23.115 [2024-11-29 07:40:13.026619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.115 [2024-11-29 07:40:13.026766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.115 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62039 00:08:23.115 [2024-11-29 07:40:13.026853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.115 [2024-11-29 07:40:13.026868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.375 [2024-11-29 07:40:13.227606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.759 07:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:24.759 00:08:24.759 real 0m4.384s 00:08:24.759 user 0m6.141s 00:08:24.759 sys 0m0.689s 00:08:24.759 07:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.759 07:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.759 ************************************ 00:08:24.759 END TEST raid_superblock_test 00:08:24.759 ************************************ 00:08:24.759 07:40:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:24.759 07:40:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.759 07:40:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.759 07:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.759 ************************************ 00:08:24.759 START TEST raid_read_error_test 00:08:24.759 ************************************ 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QcmSUj4vuj 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62251 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62251 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62251 ']' 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.759 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.759 [2024-11-29 07:40:14.474962] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:24.759 [2024-11-29 07:40:14.475076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62251 ] 00:08:24.759 [2024-11-29 07:40:14.648032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.019 [2024-11-29 07:40:14.757538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.019 [2024-11-29 07:40:14.950146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.019 [2024-11-29 07:40:14.950206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 BaseBdev1_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 true 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 [2024-11-29 07:40:15.369363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.590 [2024-11-29 07:40:15.369423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.590 [2024-11-29 07:40:15.369445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.590 [2024-11-29 07:40:15.369456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.590 [2024-11-29 07:40:15.371653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.590 [2024-11-29 07:40:15.371772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.590 BaseBdev1 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 BaseBdev2_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 true 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 [2024-11-29 07:40:15.435559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.590 [2024-11-29 07:40:15.435612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.590 [2024-11-29 07:40:15.435628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.590 [2024-11-29 07:40:15.435638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.590 [2024-11-29 07:40:15.437620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.590 [2024-11-29 07:40:15.437735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.590 BaseBdev2 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.590 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.590 [2024-11-29 07:40:15.447590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:25.590 [2024-11-29 07:40:15.449386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.590 [2024-11-29 07:40:15.449569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.590 [2024-11-29 07:40:15.449584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:25.590 [2024-11-29 07:40:15.449814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:25.590 [2024-11-29 07:40:15.449978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.590 [2024-11-29 07:40:15.449989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.590 [2024-11-29 07:40:15.450130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.591 "name": "raid_bdev1", 00:08:25.591 "uuid": "d7a01993-5634-412a-89fb-94c7feb4672a", 00:08:25.591 "strip_size_kb": 64, 00:08:25.591 "state": "online", 00:08:25.591 "raid_level": "concat", 00:08:25.591 "superblock": true, 00:08:25.591 "num_base_bdevs": 2, 00:08:25.591 "num_base_bdevs_discovered": 2, 00:08:25.591 "num_base_bdevs_operational": 2, 00:08:25.591 "base_bdevs_list": [ 00:08:25.591 { 00:08:25.591 "name": "BaseBdev1", 00:08:25.591 "uuid": "a9d39702-7eed-50d0-8b18-13dfbf838158", 00:08:25.591 "is_configured": true, 00:08:25.591 "data_offset": 2048, 00:08:25.591 "data_size": 63488 00:08:25.591 }, 00:08:25.591 { 00:08:25.591 "name": "BaseBdev2", 00:08:25.591 "uuid": "c005170a-27a6-5094-a7d2-208e530ae775", 00:08:25.591 "is_configured": true, 00:08:25.591 "data_offset": 2048, 00:08:25.591 "data_size": 63488 00:08:25.591 } 00:08:25.591 ] 00:08:25.591 }' 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.591 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.159 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.159 07:40:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.159 [2024-11-29 07:40:15.951986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.099 "name": "raid_bdev1", 00:08:27.099 "uuid": "d7a01993-5634-412a-89fb-94c7feb4672a", 00:08:27.099 "strip_size_kb": 64, 00:08:27.099 "state": "online", 00:08:27.099 "raid_level": "concat", 00:08:27.099 "superblock": true, 00:08:27.099 "num_base_bdevs": 2, 00:08:27.099 "num_base_bdevs_discovered": 2, 00:08:27.099 "num_base_bdevs_operational": 2, 00:08:27.099 "base_bdevs_list": [ 00:08:27.099 { 00:08:27.099 "name": "BaseBdev1", 00:08:27.099 "uuid": "a9d39702-7eed-50d0-8b18-13dfbf838158", 00:08:27.099 "is_configured": true, 00:08:27.099 "data_offset": 2048, 00:08:27.099 "data_size": 63488 00:08:27.099 }, 00:08:27.099 { 00:08:27.099 "name": "BaseBdev2", 00:08:27.099 "uuid": "c005170a-27a6-5094-a7d2-208e530ae775", 00:08:27.099 "is_configured": true, 00:08:27.099 "data_offset": 2048, 00:08:27.099 "data_size": 63488 00:08:27.099 } 00:08:27.099 ] 00:08:27.099 }' 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.099 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.359 07:40:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.359 07:40:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.359 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.359 [2024-11-29 07:40:17.287328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.359 [2024-11-29 07:40:17.287465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.359 [2024-11-29 07:40:17.290220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.359 [2024-11-29 07:40:17.290300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.359 [2024-11-29 07:40:17.290336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.359 [2024-11-29 07:40:17.290351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.359 { 00:08:27.359 "results": [ 00:08:27.359 { 00:08:27.359 "job": "raid_bdev1", 00:08:27.359 "core_mask": "0x1", 00:08:27.359 "workload": "randrw", 00:08:27.359 "percentage": 50, 00:08:27.359 "status": "finished", 00:08:27.359 "queue_depth": 1, 00:08:27.359 "io_size": 131072, 00:08:27.359 "runtime": 1.33642, 00:08:27.359 "iops": 16604.80986516215, 00:08:27.359 "mibps": 2075.601233145269, 00:08:27.359 "io_failed": 1, 00:08:27.359 "io_timeout": 0, 00:08:27.359 "avg_latency_us": 83.23071817846943, 00:08:27.359 "min_latency_us": 24.705676855895195, 00:08:27.359 "max_latency_us": 1359.3711790393013 00:08:27.359 } 00:08:27.359 ], 00:08:27.359 "core_count": 1 00:08:27.359 } 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62251 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62251 ']' 00:08:27.360 07:40:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62251 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.360 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62251 00:08:27.619 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.619 killing process with pid 62251 00:08:27.619 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.619 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62251' 00:08:27.619 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62251 00:08:27.619 [2024-11-29 07:40:17.325374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.619 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62251 00:08:27.619 [2024-11-29 07:40:17.454125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QcmSUj4vuj 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:29.000 ************************************ 00:08:29.000 END TEST raid_read_error_test 00:08:29.000 ************************************ 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:29.000 00:08:29.000 real 0m4.222s 00:08:29.000 user 0m5.055s 00:08:29.000 sys 0m0.490s 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.000 07:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.000 07:40:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:29.000 07:40:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.000 07:40:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.000 07:40:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.000 ************************************ 00:08:29.000 START TEST raid_write_error_test 00:08:29.000 ************************************ 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.000 07:40:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MNbIxASwwn 00:08:29.000 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62391 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62391 00:08:29.001 07:40:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62391 ']' 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.001 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.001 [2024-11-29 07:40:18.798372] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:29.001 [2024-11-29 07:40:18.798637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:08:29.261 [2024-11-29 07:40:18.993484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.261 [2024-11-29 07:40:19.094452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.521 [2024-11-29 07:40:19.288088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.521 [2024-11-29 07:40:19.288152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.780 BaseBdev1_malloc 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.780 true 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.780 [2024-11-29 07:40:19.708458] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.780 [2024-11-29 07:40:19.708510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.780 [2024-11-29 07:40:19.708529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:29.780 [2024-11-29 07:40:19.708540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.780 [2024-11-29 07:40:19.710594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.780 [2024-11-29 07:40:19.710635] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.780 BaseBdev1 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.780 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 BaseBdev2_malloc 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 true 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 [2024-11-29 07:40:19.773980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.041 [2024-11-29 07:40:19.774029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.041 [2024-11-29 07:40:19.774060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.041 
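Both error tests in this log finish with the same `bdev_raid.sh@845` pipeline: grep the bdevperf log, drop the `Job` header line, and take field 6 as the failures-per-second figure. A minimal sketch of that extraction, run against a fabricated sample line (the real bdevperf summary's column layout is an assumption here, reverse-engineered from `awk '{print $6}'`):

```shell
# Sketch of the fail_per_s extraction at bdev_raid.sh@845 in this log.
# The sample summary line is fabricated so that field 6 holds fails/s;
# the actual bdevperf column order is an assumption.
bdevperf_log=$(mktemp)
cat > "$bdevperf_log" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 : 16604.80 IOPS 2075.60 0.75 fails/s
EOF
# Mirror the log's filters: grep -v Job | grep raid_bdev1 | awk '{print $6}'
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
rm -f "$bdevperf_log"
# The test passes only when at least one injected error surfaced as a failed I/O.
[ "$fail_per_s" != "0.00" ] && echo "fail_per_s=$fail_per_s"
```

This is why the trace asserts `[[ 0.75 != \0\.\0\0 ]]` (and later `0.73`): a zero rate would mean the injected read/write error never reached the raid bdev.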
[2024-11-29 07:40:19.774070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.041 [2024-11-29 07:40:19.776038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.041 [2024-11-29 07:40:19.776077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.041 BaseBdev2 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 [2024-11-29 07:40:19.786025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.041 [2024-11-29 07:40:19.787861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.041 [2024-11-29 07:40:19.788045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.041 [2024-11-29 07:40:19.788060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:30.041 [2024-11-29 07:40:19.788292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:30.041 [2024-11-29 07:40:19.788453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.041 [2024-11-29 07:40:19.788486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:30.041 [2024-11-29 07:40:19.788646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.041 
07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.041 "name": "raid_bdev1", 00:08:30.041 "uuid": "9a74e31d-36c5-4ede-8073-a7ec820fb39b", 00:08:30.041 "strip_size_kb": 64, 00:08:30.041 "state": "online", 00:08:30.041 "raid_level": "concat", 00:08:30.041 "superblock": true, 
00:08:30.041 "num_base_bdevs": 2, 00:08:30.041 "num_base_bdevs_discovered": 2, 00:08:30.041 "num_base_bdevs_operational": 2, 00:08:30.041 "base_bdevs_list": [ 00:08:30.041 { 00:08:30.041 "name": "BaseBdev1", 00:08:30.041 "uuid": "df2fbef0-1d23-5128-9809-fd35deace588", 00:08:30.041 "is_configured": true, 00:08:30.041 "data_offset": 2048, 00:08:30.041 "data_size": 63488 00:08:30.041 }, 00:08:30.041 { 00:08:30.041 "name": "BaseBdev2", 00:08:30.041 "uuid": "78cea63e-99e8-50f5-ae47-2a0107594516", 00:08:30.041 "is_configured": true, 00:08:30.041 "data_offset": 2048, 00:08:30.041 "data_size": 63488 00:08:30.041 } 00:08:30.041 ] 00:08:30.041 }' 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.041 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.300 07:40:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.300 07:40:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.560 [2024-11-29 07:40:20.302495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.499 "name": "raid_bdev1", 00:08:31.499 "uuid": "9a74e31d-36c5-4ede-8073-a7ec820fb39b", 00:08:31.499 "strip_size_kb": 64, 00:08:31.499 "state": "online", 00:08:31.499 "raid_level": "concat", 
00:08:31.499 "superblock": true, 00:08:31.499 "num_base_bdevs": 2, 00:08:31.499 "num_base_bdevs_discovered": 2, 00:08:31.499 "num_base_bdevs_operational": 2, 00:08:31.499 "base_bdevs_list": [ 00:08:31.499 { 00:08:31.499 "name": "BaseBdev1", 00:08:31.499 "uuid": "df2fbef0-1d23-5128-9809-fd35deace588", 00:08:31.499 "is_configured": true, 00:08:31.499 "data_offset": 2048, 00:08:31.499 "data_size": 63488 00:08:31.499 }, 00:08:31.499 { 00:08:31.499 "name": "BaseBdev2", 00:08:31.499 "uuid": "78cea63e-99e8-50f5-ae47-2a0107594516", 00:08:31.499 "is_configured": true, 00:08:31.499 "data_offset": 2048, 00:08:31.499 "data_size": 63488 00:08:31.499 } 00:08:31.499 ] 00:08:31.499 }' 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.499 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.758 [2024-11-29 07:40:21.678472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.758 [2024-11-29 07:40:21.678572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.758 [2024-11-29 07:40:21.681267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.758 [2024-11-29 07:40:21.681349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.758 [2024-11-29 07:40:21.681400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.758 [2024-11-29 07:40:21.681445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:31.758 { 
00:08:31.758 "results": [ 00:08:31.758 { 00:08:31.758 "job": "raid_bdev1", 00:08:31.758 "core_mask": "0x1", 00:08:31.758 "workload": "randrw", 00:08:31.758 "percentage": 50, 00:08:31.758 "status": "finished", 00:08:31.758 "queue_depth": 1, 00:08:31.758 "io_size": 131072, 00:08:31.758 "runtime": 1.377052, 00:08:31.758 "iops": 16432.204448343273, 00:08:31.758 "mibps": 2054.025556042909, 00:08:31.758 "io_failed": 1, 00:08:31.758 "io_timeout": 0, 00:08:31.758 "avg_latency_us": 84.06047455047153, 00:08:31.758 "min_latency_us": 25.2646288209607, 00:08:31.758 "max_latency_us": 1409.4532751091704 00:08:31.758 } 00:08:31.758 ], 00:08:31.758 "core_count": 1 00:08:31.758 } 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62391 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62391 ']' 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62391 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.758 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62391 00:08:32.018 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.019 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.019 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62391' 00:08:32.019 killing process with pid 62391 00:08:32.019 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62391 00:08:32.019 [2024-11-29 07:40:21.728432] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.019 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62391 00:08:32.019 [2024-11-29 07:40:21.862926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.400 07:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MNbIxASwwn 00:08:33.400 07:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.400 07:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:33.400 ************************************ 00:08:33.400 END TEST raid_write_error_test 00:08:33.400 ************************************ 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:33.400 00:08:33.400 real 0m4.341s 00:08:33.400 user 0m5.182s 00:08:33.400 sys 0m0.582s 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.400 07:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.400 07:40:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.400 07:40:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:33.400 07:40:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.400 07:40:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.400 07:40:23 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.400 ************************************ 00:08:33.400 START TEST raid_state_function_test 00:08:33.400 ************************************ 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.400 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62529 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62529' 00:08:33.401 Process raid pid: 62529 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62529 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62529 ']' 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.401 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.401 [2024-11-29 07:40:23.173713] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:33.401 [2024-11-29 07:40:23.173889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.661 [2024-11-29 07:40:23.346928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.661 [2024-11-29 07:40:23.457691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.921 [2024-11-29 07:40:23.653922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.921 [2024-11-29 07:40:23.654037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.179 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.179 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.179 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.179 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.179 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.179 [2024-11-29 07:40:23.996021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.179 [2024-11-29 07:40:23.996156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.179 [2024-11-29 07:40:23.996189] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:34.179 [2024-11-29 07:40:23.996214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.179 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.180 "name": "Existed_Raid", 00:08:34.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.180 "strip_size_kb": 0, 00:08:34.180 "state": "configuring", 00:08:34.180 "raid_level": "raid1", 00:08:34.180 "superblock": false, 00:08:34.180 "num_base_bdevs": 2, 00:08:34.180 "num_base_bdevs_discovered": 0, 00:08:34.180 "num_base_bdevs_operational": 2, 00:08:34.180 "base_bdevs_list": [ 00:08:34.180 { 00:08:34.180 "name": "BaseBdev1", 00:08:34.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.180 "is_configured": false, 00:08:34.180 "data_offset": 0, 00:08:34.180 "data_size": 0 00:08:34.180 }, 00:08:34.180 { 00:08:34.180 "name": "BaseBdev2", 00:08:34.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.180 "is_configured": false, 00:08:34.180 "data_offset": 0, 00:08:34.180 "data_size": 0 00:08:34.180 } 00:08:34.180 ] 00:08:34.180 }' 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.180 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 [2024-11-29 07:40:24.423233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.749 [2024-11-29 07:40:24.423267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 [2024-11-29 07:40:24.435206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.749 [2024-11-29 07:40:24.435245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.749 [2024-11-29 07:40:24.435253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.749 [2024-11-29 07:40:24.435279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 [2024-11-29 07:40:24.483404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.749 BaseBdev1 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.749 
07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.749 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.749 [ 00:08:34.749 { 00:08:34.749 "name": "BaseBdev1", 00:08:34.749 "aliases": [ 00:08:34.749 "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e" 00:08:34.749 ], 00:08:34.749 "product_name": "Malloc disk", 00:08:34.749 "block_size": 512, 00:08:34.749 "num_blocks": 65536, 00:08:34.749 "uuid": "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e", 00:08:34.749 "assigned_rate_limits": { 00:08:34.749 "rw_ios_per_sec": 0, 00:08:34.749 "rw_mbytes_per_sec": 0, 00:08:34.749 "r_mbytes_per_sec": 0, 00:08:34.749 "w_mbytes_per_sec": 0 00:08:34.749 }, 00:08:34.749 "claimed": true, 00:08:34.749 "claim_type": "exclusive_write", 00:08:34.749 "zoned": false, 00:08:34.749 "supported_io_types": { 00:08:34.749 "read": true, 00:08:34.749 "write": true, 00:08:34.749 "unmap": true, 00:08:34.749 "flush": true, 00:08:34.749 "reset": true, 00:08:34.749 "nvme_admin": false, 00:08:34.749 "nvme_io": false, 00:08:34.749 "nvme_io_md": false, 00:08:34.749 "write_zeroes": true, 00:08:34.749 "zcopy": true, 00:08:34.749 "get_zone_info": 
false, 00:08:34.749 "zone_management": false, 00:08:34.749 "zone_append": false, 00:08:34.749 "compare": false, 00:08:34.749 "compare_and_write": false, 00:08:34.749 "abort": true, 00:08:34.749 "seek_hole": false, 00:08:34.749 "seek_data": false, 00:08:34.749 "copy": true, 00:08:34.749 "nvme_iov_md": false 00:08:34.749 }, 00:08:34.750 "memory_domains": [ 00:08:34.750 { 00:08:34.750 "dma_device_id": "system", 00:08:34.750 "dma_device_type": 1 00:08:34.750 }, 00:08:34.750 { 00:08:34.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.750 "dma_device_type": 2 00:08:34.750 } 00:08:34.750 ], 00:08:34.750 "driver_specific": {} 00:08:34.750 } 00:08:34.750 ] 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.750 "name": "Existed_Raid", 00:08:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.750 "strip_size_kb": 0, 00:08:34.750 "state": "configuring", 00:08:34.750 "raid_level": "raid1", 00:08:34.750 "superblock": false, 00:08:34.750 "num_base_bdevs": 2, 00:08:34.750 "num_base_bdevs_discovered": 1, 00:08:34.750 "num_base_bdevs_operational": 2, 00:08:34.750 "base_bdevs_list": [ 00:08:34.750 { 00:08:34.750 "name": "BaseBdev1", 00:08:34.750 "uuid": "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e", 00:08:34.750 "is_configured": true, 00:08:34.750 "data_offset": 0, 00:08:34.750 "data_size": 65536 00:08:34.750 }, 00:08:34.750 { 00:08:34.750 "name": "BaseBdev2", 00:08:34.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.750 "is_configured": false, 00:08:34.750 "data_offset": 0, 00:08:34.750 "data_size": 0 00:08:34.750 } 00:08:34.750 ] 00:08:34.750 }' 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.750 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 [2024-11-29 07:40:24.982567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.318 [2024-11-29 07:40:24.982657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 [2024-11-29 07:40:24.990574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.318 [2024-11-29 07:40:24.992422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.318 [2024-11-29 07:40:24.992495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.318 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.318 "name": "Existed_Raid", 00:08:35.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.318 "strip_size_kb": 0, 00:08:35.318 "state": "configuring", 00:08:35.318 "raid_level": "raid1", 00:08:35.318 "superblock": false, 00:08:35.318 "num_base_bdevs": 2, 00:08:35.318 "num_base_bdevs_discovered": 1, 00:08:35.318 "num_base_bdevs_operational": 2, 00:08:35.318 "base_bdevs_list": [ 00:08:35.318 { 00:08:35.318 "name": "BaseBdev1", 00:08:35.318 "uuid": "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e", 00:08:35.318 
"is_configured": true, 00:08:35.318 "data_offset": 0, 00:08:35.318 "data_size": 65536 00:08:35.318 }, 00:08:35.318 { 00:08:35.318 "name": "BaseBdev2", 00:08:35.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.318 "is_configured": false, 00:08:35.318 "data_offset": 0, 00:08:35.318 "data_size": 0 00:08:35.318 } 00:08:35.318 ] 00:08:35.318 }' 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.318 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.577 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 [2024-11-29 07:40:25.457576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.577 [2024-11-29 07:40:25.457627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.577 [2024-11-29 07:40:25.457635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:35.578 [2024-11-29 07:40:25.457866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:35.578 [2024-11-29 07:40:25.458031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.578 [2024-11-29 07:40:25.458044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.578 [2024-11-29 07:40:25.458410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.578 BaseBdev2 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 [ 00:08:35.578 { 00:08:35.578 "name": "BaseBdev2", 00:08:35.578 "aliases": [ 00:08:35.578 "5fde22f6-2efc-405c-914b-59f1e748b08f" 00:08:35.578 ], 00:08:35.578 "product_name": "Malloc disk", 00:08:35.578 "block_size": 512, 00:08:35.578 "num_blocks": 65536, 00:08:35.578 "uuid": "5fde22f6-2efc-405c-914b-59f1e748b08f", 00:08:35.578 "assigned_rate_limits": { 00:08:35.578 "rw_ios_per_sec": 0, 00:08:35.578 "rw_mbytes_per_sec": 0, 00:08:35.578 "r_mbytes_per_sec": 0, 00:08:35.578 "w_mbytes_per_sec": 0 00:08:35.578 }, 00:08:35.578 "claimed": true, 00:08:35.578 "claim_type": 
"exclusive_write", 00:08:35.578 "zoned": false, 00:08:35.578 "supported_io_types": { 00:08:35.578 "read": true, 00:08:35.578 "write": true, 00:08:35.578 "unmap": true, 00:08:35.578 "flush": true, 00:08:35.578 "reset": true, 00:08:35.578 "nvme_admin": false, 00:08:35.578 "nvme_io": false, 00:08:35.578 "nvme_io_md": false, 00:08:35.578 "write_zeroes": true, 00:08:35.578 "zcopy": true, 00:08:35.578 "get_zone_info": false, 00:08:35.578 "zone_management": false, 00:08:35.578 "zone_append": false, 00:08:35.578 "compare": false, 00:08:35.578 "compare_and_write": false, 00:08:35.578 "abort": true, 00:08:35.578 "seek_hole": false, 00:08:35.578 "seek_data": false, 00:08:35.578 "copy": true, 00:08:35.578 "nvme_iov_md": false 00:08:35.578 }, 00:08:35.578 "memory_domains": [ 00:08:35.578 { 00:08:35.578 "dma_device_id": "system", 00:08:35.578 "dma_device_type": 1 00:08:35.578 }, 00:08:35.578 { 00:08:35.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.578 "dma_device_type": 2 00:08:35.578 } 00:08:35.578 ], 00:08:35.578 "driver_specific": {} 00:08:35.578 } 00:08:35.578 ] 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.578 
07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.838 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.838 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.838 "name": "Existed_Raid", 00:08:35.838 "uuid": "77636cff-d1ec-47a8-b6bd-1bc4f117a541", 00:08:35.838 "strip_size_kb": 0, 00:08:35.838 "state": "online", 00:08:35.838 "raid_level": "raid1", 00:08:35.838 "superblock": false, 00:08:35.838 "num_base_bdevs": 2, 00:08:35.838 "num_base_bdevs_discovered": 2, 00:08:35.838 "num_base_bdevs_operational": 2, 00:08:35.838 "base_bdevs_list": [ 00:08:35.838 { 00:08:35.838 "name": "BaseBdev1", 00:08:35.838 "uuid": "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e", 00:08:35.838 "is_configured": true, 00:08:35.838 "data_offset": 0, 00:08:35.838 "data_size": 65536 00:08:35.838 }, 00:08:35.838 { 00:08:35.838 "name": "BaseBdev2", 
00:08:35.838 "uuid": "5fde22f6-2efc-405c-914b-59f1e748b08f", 00:08:35.838 "is_configured": true, 00:08:35.838 "data_offset": 0, 00:08:35.838 "data_size": 65536 00:08:35.838 } 00:08:35.838 ] 00:08:35.838 }' 00:08:35.838 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.838 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.097 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.098 [2024-11-29 07:40:25.929044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.098 "name": "Existed_Raid", 00:08:36.098 "aliases": [ 00:08:36.098 "77636cff-d1ec-47a8-b6bd-1bc4f117a541" 00:08:36.098 ], 
00:08:36.098 "product_name": "Raid Volume", 00:08:36.098 "block_size": 512, 00:08:36.098 "num_blocks": 65536, 00:08:36.098 "uuid": "77636cff-d1ec-47a8-b6bd-1bc4f117a541", 00:08:36.098 "assigned_rate_limits": { 00:08:36.098 "rw_ios_per_sec": 0, 00:08:36.098 "rw_mbytes_per_sec": 0, 00:08:36.098 "r_mbytes_per_sec": 0, 00:08:36.098 "w_mbytes_per_sec": 0 00:08:36.098 }, 00:08:36.098 "claimed": false, 00:08:36.098 "zoned": false, 00:08:36.098 "supported_io_types": { 00:08:36.098 "read": true, 00:08:36.098 "write": true, 00:08:36.098 "unmap": false, 00:08:36.098 "flush": false, 00:08:36.098 "reset": true, 00:08:36.098 "nvme_admin": false, 00:08:36.098 "nvme_io": false, 00:08:36.098 "nvme_io_md": false, 00:08:36.098 "write_zeroes": true, 00:08:36.098 "zcopy": false, 00:08:36.098 "get_zone_info": false, 00:08:36.098 "zone_management": false, 00:08:36.098 "zone_append": false, 00:08:36.098 "compare": false, 00:08:36.098 "compare_and_write": false, 00:08:36.098 "abort": false, 00:08:36.098 "seek_hole": false, 00:08:36.098 "seek_data": false, 00:08:36.098 "copy": false, 00:08:36.098 "nvme_iov_md": false 00:08:36.098 }, 00:08:36.098 "memory_domains": [ 00:08:36.098 { 00:08:36.098 "dma_device_id": "system", 00:08:36.098 "dma_device_type": 1 00:08:36.098 }, 00:08:36.098 { 00:08:36.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.098 "dma_device_type": 2 00:08:36.098 }, 00:08:36.098 { 00:08:36.098 "dma_device_id": "system", 00:08:36.098 "dma_device_type": 1 00:08:36.098 }, 00:08:36.098 { 00:08:36.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.098 "dma_device_type": 2 00:08:36.098 } 00:08:36.098 ], 00:08:36.098 "driver_specific": { 00:08:36.098 "raid": { 00:08:36.098 "uuid": "77636cff-d1ec-47a8-b6bd-1bc4f117a541", 00:08:36.098 "strip_size_kb": 0, 00:08:36.098 "state": "online", 00:08:36.098 "raid_level": "raid1", 00:08:36.098 "superblock": false, 00:08:36.098 "num_base_bdevs": 2, 00:08:36.098 "num_base_bdevs_discovered": 2, 00:08:36.098 "num_base_bdevs_operational": 
2, 00:08:36.098 "base_bdevs_list": [ 00:08:36.098 { 00:08:36.098 "name": "BaseBdev1", 00:08:36.098 "uuid": "06dfcd21-04a3-47af-91d2-4a9f3d9cd61e", 00:08:36.098 "is_configured": true, 00:08:36.098 "data_offset": 0, 00:08:36.098 "data_size": 65536 00:08:36.098 }, 00:08:36.098 { 00:08:36.098 "name": "BaseBdev2", 00:08:36.098 "uuid": "5fde22f6-2efc-405c-914b-59f1e748b08f", 00:08:36.098 "is_configured": true, 00:08:36.098 "data_offset": 0, 00:08:36.098 "data_size": 65536 00:08:36.098 } 00:08:36.098 ] 00:08:36.098 } 00:08:36.098 } 00:08:36.098 }' 00:08:36.098 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.098 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:36.098 BaseBdev2' 00:08:36.098 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.359 07:40:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 [2024-11-29 07:40:26.152453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.359 "name": "Existed_Raid", 00:08:36.359 "uuid": 
"77636cff-d1ec-47a8-b6bd-1bc4f117a541", 00:08:36.359 "strip_size_kb": 0, 00:08:36.359 "state": "online", 00:08:36.359 "raid_level": "raid1", 00:08:36.359 "superblock": false, 00:08:36.359 "num_base_bdevs": 2, 00:08:36.359 "num_base_bdevs_discovered": 1, 00:08:36.359 "num_base_bdevs_operational": 1, 00:08:36.359 "base_bdevs_list": [ 00:08:36.359 { 00:08:36.359 "name": null, 00:08:36.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.359 "is_configured": false, 00:08:36.359 "data_offset": 0, 00:08:36.359 "data_size": 65536 00:08:36.359 }, 00:08:36.359 { 00:08:36.359 "name": "BaseBdev2", 00:08:36.359 "uuid": "5fde22f6-2efc-405c-914b-59f1e748b08f", 00:08:36.359 "is_configured": true, 00:08:36.359 "data_offset": 0, 00:08:36.359 "data_size": 65536 00:08:36.359 } 00:08:36.359 ] 00:08:36.359 }' 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.359 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.929 [2024-11-29 07:40:26.743295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.929 [2024-11-29 07:40:26.743394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.929 [2024-11-29 07:40:26.833856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.929 [2024-11-29 07:40:26.833907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.929 [2024-11-29 07:40:26.833917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.929 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:37.190 
07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62529 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62529 ']' 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62529 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62529 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.190 killing process with pid 62529 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62529' 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62529 00:08:37.190 [2024-11-29 07:40:26.928627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.190 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62529 00:08:37.190 [2024-11-29 07:40:26.944887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.129 07:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:38.129 00:08:38.129 real 0m4.937s 00:08:38.129 user 0m7.131s 00:08:38.129 sys 0m0.801s 00:08:38.129 ************************************ 00:08:38.129 END TEST raid_state_function_test 00:08:38.129 
************************************ 00:08:38.129 07:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.130 07:40:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:38.130 07:40:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:38.130 07:40:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.130 07:40:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.130 ************************************ 00:08:38.130 START TEST raid_state_function_test_sb 00:08:38.130 ************************************ 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.130 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62782 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62782' 00:08:38.389 Process raid pid: 62782 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62782 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 62782 ']' 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.389 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.389 [2024-11-29 07:40:28.162127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:38.389 [2024-11-29 07:40:28.162237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.389 [2024-11-29 07:40:28.318665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.649 [2024-11-29 07:40:28.424900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.906 [2024-11-29 07:40:28.624271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.906 [2024-11-29 07:40:28.624301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 [2024-11-29 07:40:28.982840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.165 [2024-11-29 07:40:28.982898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.165 [2024-11-29 07:40:28.982909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.165 [2024-11-29 07:40:28.982919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.165 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.165 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.165 "name": "Existed_Raid", 00:08:39.165 "uuid": "39d550d7-a144-446d-9cc7-33f298287c1e", 00:08:39.165 "strip_size_kb": 0, 00:08:39.165 "state": "configuring", 00:08:39.165 "raid_level": "raid1", 00:08:39.165 "superblock": true, 00:08:39.165 "num_base_bdevs": 2, 00:08:39.165 "num_base_bdevs_discovered": 0, 00:08:39.165 "num_base_bdevs_operational": 2, 00:08:39.165 "base_bdevs_list": [ 00:08:39.165 { 00:08:39.165 "name": "BaseBdev1", 00:08:39.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.165 "is_configured": false, 00:08:39.165 "data_offset": 0, 00:08:39.165 "data_size": 0 00:08:39.165 }, 00:08:39.165 { 00:08:39.165 "name": "BaseBdev2", 00:08:39.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.165 "is_configured": false, 00:08:39.165 "data_offset": 0, 00:08:39.165 "data_size": 0 00:08:39.165 } 00:08:39.165 ] 00:08:39.165 }' 00:08:39.165 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.165 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 [2024-11-29 07:40:29.418014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.733 [2024-11-29 07:40:29.418104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 [2024-11-29 07:40:29.430003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.733 [2024-11-29 07:40:29.430082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.733 [2024-11-29 07:40:29.430123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.733 [2024-11-29 07:40:29.430149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:39.733 [2024-11-29 07:40:29.477023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.733 BaseBdev1 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 [ 00:08:39.733 { 00:08:39.733 "name": "BaseBdev1", 00:08:39.733 "aliases": [ 00:08:39.733 "36707244-c635-4f6d-8da8-d901a576e905" 00:08:39.733 ], 00:08:39.733 "product_name": "Malloc disk", 00:08:39.733 "block_size": 512, 
00:08:39.733 "num_blocks": 65536, 00:08:39.733 "uuid": "36707244-c635-4f6d-8da8-d901a576e905", 00:08:39.733 "assigned_rate_limits": { 00:08:39.733 "rw_ios_per_sec": 0, 00:08:39.733 "rw_mbytes_per_sec": 0, 00:08:39.733 "r_mbytes_per_sec": 0, 00:08:39.733 "w_mbytes_per_sec": 0 00:08:39.733 }, 00:08:39.733 "claimed": true, 00:08:39.733 "claim_type": "exclusive_write", 00:08:39.733 "zoned": false, 00:08:39.733 "supported_io_types": { 00:08:39.733 "read": true, 00:08:39.733 "write": true, 00:08:39.733 "unmap": true, 00:08:39.733 "flush": true, 00:08:39.733 "reset": true, 00:08:39.733 "nvme_admin": false, 00:08:39.733 "nvme_io": false, 00:08:39.733 "nvme_io_md": false, 00:08:39.733 "write_zeroes": true, 00:08:39.733 "zcopy": true, 00:08:39.733 "get_zone_info": false, 00:08:39.733 "zone_management": false, 00:08:39.733 "zone_append": false, 00:08:39.733 "compare": false, 00:08:39.733 "compare_and_write": false, 00:08:39.733 "abort": true, 00:08:39.733 "seek_hole": false, 00:08:39.733 "seek_data": false, 00:08:39.733 "copy": true, 00:08:39.733 "nvme_iov_md": false 00:08:39.733 }, 00:08:39.733 "memory_domains": [ 00:08:39.733 { 00:08:39.733 "dma_device_id": "system", 00:08:39.733 "dma_device_type": 1 00:08:39.733 }, 00:08:39.733 { 00:08:39.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.733 "dma_device_type": 2 00:08:39.733 } 00:08:39.733 ], 00:08:39.733 "driver_specific": {} 00:08:39.733 } 00:08:39.733 ] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.733 "name": "Existed_Raid", 00:08:39.733 "uuid": "133694a7-d20f-432c-b8a8-bed324f059cb", 00:08:39.733 "strip_size_kb": 0, 00:08:39.733 "state": "configuring", 00:08:39.733 "raid_level": "raid1", 00:08:39.733 "superblock": true, 00:08:39.733 "num_base_bdevs": 2, 00:08:39.733 "num_base_bdevs_discovered": 1, 00:08:39.733 "num_base_bdevs_operational": 2, 00:08:39.733 "base_bdevs_list": [ 00:08:39.733 { 00:08:39.733 "name": "BaseBdev1", 
00:08:39.733 "uuid": "36707244-c635-4f6d-8da8-d901a576e905", 00:08:39.733 "is_configured": true, 00:08:39.733 "data_offset": 2048, 00:08:39.733 "data_size": 63488 00:08:39.733 }, 00:08:39.733 { 00:08:39.733 "name": "BaseBdev2", 00:08:39.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.733 "is_configured": false, 00:08:39.733 "data_offset": 0, 00:08:39.733 "data_size": 0 00:08:39.733 } 00:08:39.733 ] 00:08:39.733 }' 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.733 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.302 [2024-11-29 07:40:29.948237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.302 [2024-11-29 07:40:29.948283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.302 [2024-11-29 07:40:29.960260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.302 [2024-11-29 07:40:29.962015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:40.302 [2024-11-29 07:40:29.962057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.302 "name": "Existed_Raid", 00:08:40.302 "uuid": "c45419a7-6ff3-4ca8-99c1-55e660fcfe85", 00:08:40.302 "strip_size_kb": 0, 00:08:40.302 "state": "configuring", 00:08:40.302 "raid_level": "raid1", 00:08:40.302 "superblock": true, 00:08:40.302 "num_base_bdevs": 2, 00:08:40.302 "num_base_bdevs_discovered": 1, 00:08:40.302 "num_base_bdevs_operational": 2, 00:08:40.302 "base_bdevs_list": [ 00:08:40.302 { 00:08:40.302 "name": "BaseBdev1", 00:08:40.302 "uuid": "36707244-c635-4f6d-8da8-d901a576e905", 00:08:40.302 "is_configured": true, 00:08:40.302 "data_offset": 2048, 00:08:40.302 "data_size": 63488 00:08:40.302 }, 00:08:40.302 { 00:08:40.302 "name": "BaseBdev2", 00:08:40.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.302 "is_configured": false, 00:08:40.302 "data_offset": 0, 00:08:40.302 "data_size": 0 00:08:40.302 } 00:08:40.302 ] 00:08:40.302 }' 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.302 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 [2024-11-29 07:40:30.395877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.562 [2024-11-29 07:40:30.396242] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.562 [2024-11-29 07:40:30.396297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.562 [2024-11-29 07:40:30.396582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:40.562 BaseBdev2 00:08:40.562 [2024-11-29 07:40:30.396804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.562 [2024-11-29 07:40:30.396820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:40.562 [2024-11-29 07:40:30.396958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 [ 00:08:40.562 { 00:08:40.562 "name": "BaseBdev2", 00:08:40.562 "aliases": [ 00:08:40.562 "7e3564a3-b002-4a8a-b488-86480bcf9bf2" 00:08:40.562 ], 00:08:40.562 "product_name": "Malloc disk", 00:08:40.562 "block_size": 512, 00:08:40.562 "num_blocks": 65536, 00:08:40.562 "uuid": "7e3564a3-b002-4a8a-b488-86480bcf9bf2", 00:08:40.562 "assigned_rate_limits": { 00:08:40.562 "rw_ios_per_sec": 0, 00:08:40.562 "rw_mbytes_per_sec": 0, 00:08:40.562 "r_mbytes_per_sec": 0, 00:08:40.562 "w_mbytes_per_sec": 0 00:08:40.562 }, 00:08:40.562 "claimed": true, 00:08:40.562 "claim_type": "exclusive_write", 00:08:40.562 "zoned": false, 00:08:40.562 "supported_io_types": { 00:08:40.562 "read": true, 00:08:40.562 "write": true, 00:08:40.562 "unmap": true, 00:08:40.562 "flush": true, 00:08:40.562 "reset": true, 00:08:40.562 "nvme_admin": false, 00:08:40.562 "nvme_io": false, 00:08:40.562 "nvme_io_md": false, 00:08:40.562 "write_zeroes": true, 00:08:40.562 "zcopy": true, 00:08:40.562 "get_zone_info": false, 00:08:40.562 "zone_management": false, 00:08:40.562 "zone_append": false, 00:08:40.562 "compare": false, 00:08:40.562 "compare_and_write": false, 00:08:40.562 "abort": true, 00:08:40.562 "seek_hole": false, 00:08:40.562 "seek_data": false, 00:08:40.562 "copy": true, 00:08:40.562 "nvme_iov_md": false 00:08:40.562 }, 00:08:40.562 "memory_domains": [ 00:08:40.562 { 00:08:40.562 "dma_device_id": "system", 00:08:40.562 "dma_device_type": 1 00:08:40.562 }, 00:08:40.562 { 00:08:40.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.562 "dma_device_type": 2 00:08:40.562 } 00:08:40.562 ], 00:08:40.562 "driver_specific": 
{} 00:08:40.562 } 00:08:40.562 ] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.562 "name": "Existed_Raid", 00:08:40.562 "uuid": "c45419a7-6ff3-4ca8-99c1-55e660fcfe85", 00:08:40.562 "strip_size_kb": 0, 00:08:40.562 "state": "online", 00:08:40.562 "raid_level": "raid1", 00:08:40.562 "superblock": true, 00:08:40.562 "num_base_bdevs": 2, 00:08:40.562 "num_base_bdevs_discovered": 2, 00:08:40.562 "num_base_bdevs_operational": 2, 00:08:40.562 "base_bdevs_list": [ 00:08:40.562 { 00:08:40.562 "name": "BaseBdev1", 00:08:40.562 "uuid": "36707244-c635-4f6d-8da8-d901a576e905", 00:08:40.562 "is_configured": true, 00:08:40.562 "data_offset": 2048, 00:08:40.562 "data_size": 63488 00:08:40.562 }, 00:08:40.562 { 00:08:40.562 "name": "BaseBdev2", 00:08:40.562 "uuid": "7e3564a3-b002-4a8a-b488-86480bcf9bf2", 00:08:40.562 "is_configured": true, 00:08:40.562 "data_offset": 2048, 00:08:40.562 "data_size": 63488 00:08:40.562 } 00:08:40.562 ] 00:08:40.562 }' 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.562 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.130 [2024-11-29 07:40:30.855416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.130 "name": "Existed_Raid", 00:08:41.130 "aliases": [ 00:08:41.130 "c45419a7-6ff3-4ca8-99c1-55e660fcfe85" 00:08:41.130 ], 00:08:41.130 "product_name": "Raid Volume", 00:08:41.130 "block_size": 512, 00:08:41.130 "num_blocks": 63488, 00:08:41.130 "uuid": "c45419a7-6ff3-4ca8-99c1-55e660fcfe85", 00:08:41.130 "assigned_rate_limits": { 00:08:41.130 "rw_ios_per_sec": 0, 00:08:41.130 "rw_mbytes_per_sec": 0, 00:08:41.130 "r_mbytes_per_sec": 0, 00:08:41.130 "w_mbytes_per_sec": 0 00:08:41.130 }, 00:08:41.130 "claimed": false, 00:08:41.130 "zoned": false, 00:08:41.130 "supported_io_types": { 00:08:41.130 "read": true, 00:08:41.130 "write": true, 00:08:41.130 "unmap": false, 00:08:41.130 "flush": false, 00:08:41.130 "reset": true, 00:08:41.130 "nvme_admin": false, 00:08:41.130 "nvme_io": false, 00:08:41.130 "nvme_io_md": false, 00:08:41.130 "write_zeroes": true, 00:08:41.130 "zcopy": false, 00:08:41.130 "get_zone_info": false, 00:08:41.130 "zone_management": false, 00:08:41.130 "zone_append": false, 00:08:41.130 "compare": false, 00:08:41.130 "compare_and_write": false, 
00:08:41.130 "abort": false, 00:08:41.130 "seek_hole": false, 00:08:41.130 "seek_data": false, 00:08:41.130 "copy": false, 00:08:41.130 "nvme_iov_md": false 00:08:41.130 }, 00:08:41.130 "memory_domains": [ 00:08:41.130 { 00:08:41.130 "dma_device_id": "system", 00:08:41.130 "dma_device_type": 1 00:08:41.130 }, 00:08:41.130 { 00:08:41.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.130 "dma_device_type": 2 00:08:41.130 }, 00:08:41.130 { 00:08:41.130 "dma_device_id": "system", 00:08:41.130 "dma_device_type": 1 00:08:41.130 }, 00:08:41.130 { 00:08:41.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.130 "dma_device_type": 2 00:08:41.130 } 00:08:41.130 ], 00:08:41.130 "driver_specific": { 00:08:41.130 "raid": { 00:08:41.130 "uuid": "c45419a7-6ff3-4ca8-99c1-55e660fcfe85", 00:08:41.130 "strip_size_kb": 0, 00:08:41.130 "state": "online", 00:08:41.130 "raid_level": "raid1", 00:08:41.130 "superblock": true, 00:08:41.130 "num_base_bdevs": 2, 00:08:41.130 "num_base_bdevs_discovered": 2, 00:08:41.130 "num_base_bdevs_operational": 2, 00:08:41.130 "base_bdevs_list": [ 00:08:41.130 { 00:08:41.130 "name": "BaseBdev1", 00:08:41.130 "uuid": "36707244-c635-4f6d-8da8-d901a576e905", 00:08:41.130 "is_configured": true, 00:08:41.130 "data_offset": 2048, 00:08:41.130 "data_size": 63488 00:08:41.130 }, 00:08:41.130 { 00:08:41.130 "name": "BaseBdev2", 00:08:41.130 "uuid": "7e3564a3-b002-4a8a-b488-86480bcf9bf2", 00:08:41.130 "is_configured": true, 00:08:41.130 "data_offset": 2048, 00:08:41.130 "data_size": 63488 00:08:41.130 } 00:08:41.130 ] 00:08:41.130 } 00:08:41.130 } 00:08:41.130 }' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.130 BaseBdev2' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.130 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.131 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.131 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.131 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.131 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.131 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.390 [2024-11-29 07:40:31.102728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.390 07:40:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.390 "name": "Existed_Raid", 00:08:41.390 "uuid": "c45419a7-6ff3-4ca8-99c1-55e660fcfe85", 00:08:41.390 "strip_size_kb": 0, 00:08:41.390 "state": "online", 00:08:41.390 "raid_level": "raid1", 00:08:41.390 "superblock": true, 00:08:41.390 "num_base_bdevs": 2, 00:08:41.390 "num_base_bdevs_discovered": 1, 00:08:41.390 "num_base_bdevs_operational": 1, 00:08:41.390 "base_bdevs_list": [ 00:08:41.390 { 00:08:41.390 "name": null, 00:08:41.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.390 "is_configured": false, 00:08:41.390 "data_offset": 0, 00:08:41.390 "data_size": 63488 00:08:41.390 }, 00:08:41.390 { 00:08:41.390 "name": "BaseBdev2", 00:08:41.390 "uuid": "7e3564a3-b002-4a8a-b488-86480bcf9bf2", 00:08:41.390 "is_configured": true, 00:08:41.390 "data_offset": 2048, 00:08:41.390 "data_size": 63488 00:08:41.390 } 00:08:41.390 ] 00:08:41.390 }' 00:08:41.390 
07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.390 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.971 [2024-11-29 07:40:31.728877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.971 [2024-11-29 07:40:31.728990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.971 [2024-11-29 07:40:31.819823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.971 [2024-11-29 07:40:31.819879] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.971 [2024-11-29 07:40:31.819891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62782 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62782 ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62782 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62782 00:08:41.971 killing process with pid 62782 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62782' 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62782 00:08:41.971 [2024-11-29 07:40:31.913705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.971 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62782 00:08:42.230 [2024-11-29 07:40:31.930955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.167 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:43.167 00:08:43.167 real 0m4.930s 00:08:43.167 user 0m7.157s 00:08:43.167 sys 0m0.761s 00:08:43.167 ************************************ 00:08:43.167 END TEST raid_state_function_test_sb 00:08:43.167 ************************************ 00:08:43.167 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.167 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.167 07:40:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:43.167 07:40:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:43.167 07:40:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.167 07:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.167 
************************************ 00:08:43.167 START TEST raid_superblock_test 00:08:43.167 ************************************ 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63034 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63034 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63034 ']' 00:08:43.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.167 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.426 [2024-11-29 07:40:33.157631] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:43.426 [2024-11-29 07:40:33.157841] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63034 ] 00:08:43.426 [2024-11-29 07:40:33.325210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.685 [2024-11-29 07:40:33.431678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.685 [2024-11-29 07:40:33.625986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.685 [2024-11-29 07:40:33.626152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:44.252 
07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.252 07:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.252 malloc1 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.252 [2024-11-29 07:40:34.026858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.252 [2024-11-29 07:40:34.026916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.252 [2024-11-29 07:40:34.026937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:44.252 [2024-11-29 07:40:34.026946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.252 [2024-11-29 07:40:34.029048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.252 [2024-11-29 07:40:34.029086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.252 pt1 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:44.252 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 malloc2 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 [2024-11-29 07:40:34.079134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.253 [2024-11-29 07:40:34.079231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.253 [2024-11-29 07:40:34.079290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:44.253 [2024-11-29 07:40:34.079318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.253 [2024-11-29 07:40:34.081472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.253 [2024-11-29 07:40:34.081540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.253 
pt2 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 [2024-11-29 07:40:34.091167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.253 [2024-11-29 07:40:34.093006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.253 [2024-11-29 07:40:34.093240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.253 [2024-11-29 07:40:34.093287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.253 [2024-11-29 07:40:34.093536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.253 [2024-11-29 07:40:34.093731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.253 [2024-11-29 07:40:34.093779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:44.253 [2024-11-29 07:40:34.093954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.253 "name": "raid_bdev1", 00:08:44.253 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:44.253 "strip_size_kb": 0, 00:08:44.253 "state": "online", 00:08:44.253 "raid_level": "raid1", 00:08:44.253 "superblock": true, 00:08:44.253 "num_base_bdevs": 2, 00:08:44.253 "num_base_bdevs_discovered": 2, 00:08:44.253 "num_base_bdevs_operational": 2, 00:08:44.253 "base_bdevs_list": [ 00:08:44.253 { 00:08:44.253 "name": "pt1", 00:08:44.253 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:44.253 "is_configured": true, 00:08:44.253 "data_offset": 2048, 00:08:44.253 "data_size": 63488 00:08:44.253 }, 00:08:44.253 { 00:08:44.253 "name": "pt2", 00:08:44.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.253 "is_configured": true, 00:08:44.253 "data_offset": 2048, 00:08:44.253 "data_size": 63488 00:08:44.253 } 00:08:44.253 ] 00:08:44.253 }' 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.253 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.821 [2024-11-29 07:40:34.486693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:44.821 "name": "raid_bdev1", 00:08:44.821 "aliases": [ 00:08:44.821 "e80390c7-9206-452f-b687-f11a35c0d667" 00:08:44.821 ], 00:08:44.821 "product_name": "Raid Volume", 00:08:44.821 "block_size": 512, 00:08:44.821 "num_blocks": 63488, 00:08:44.821 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:44.821 "assigned_rate_limits": { 00:08:44.821 "rw_ios_per_sec": 0, 00:08:44.821 "rw_mbytes_per_sec": 0, 00:08:44.821 "r_mbytes_per_sec": 0, 00:08:44.821 "w_mbytes_per_sec": 0 00:08:44.821 }, 00:08:44.821 "claimed": false, 00:08:44.821 "zoned": false, 00:08:44.821 "supported_io_types": { 00:08:44.821 "read": true, 00:08:44.821 "write": true, 00:08:44.821 "unmap": false, 00:08:44.821 "flush": false, 00:08:44.821 "reset": true, 00:08:44.821 "nvme_admin": false, 00:08:44.821 "nvme_io": false, 00:08:44.821 "nvme_io_md": false, 00:08:44.821 "write_zeroes": true, 00:08:44.821 "zcopy": false, 00:08:44.821 "get_zone_info": false, 00:08:44.821 "zone_management": false, 00:08:44.821 "zone_append": false, 00:08:44.821 "compare": false, 00:08:44.821 "compare_and_write": false, 00:08:44.821 "abort": false, 00:08:44.821 "seek_hole": false, 00:08:44.821 "seek_data": false, 00:08:44.821 "copy": false, 00:08:44.821 "nvme_iov_md": false 00:08:44.821 }, 00:08:44.821 "memory_domains": [ 00:08:44.821 { 00:08:44.821 "dma_device_id": "system", 00:08:44.821 "dma_device_type": 1 00:08:44.821 }, 00:08:44.821 { 00:08:44.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.821 "dma_device_type": 2 00:08:44.821 }, 00:08:44.821 { 00:08:44.821 "dma_device_id": "system", 00:08:44.821 "dma_device_type": 1 00:08:44.821 }, 00:08:44.821 { 00:08:44.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.821 "dma_device_type": 2 00:08:44.821 } 00:08:44.821 ], 00:08:44.821 "driver_specific": { 00:08:44.821 "raid": { 00:08:44.821 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:44.821 "strip_size_kb": 0, 00:08:44.821 "state": "online", 00:08:44.821 "raid_level": "raid1", 
00:08:44.821 "superblock": true, 00:08:44.821 "num_base_bdevs": 2, 00:08:44.821 "num_base_bdevs_discovered": 2, 00:08:44.821 "num_base_bdevs_operational": 2, 00:08:44.821 "base_bdevs_list": [ 00:08:44.821 { 00:08:44.821 "name": "pt1", 00:08:44.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.821 "is_configured": true, 00:08:44.821 "data_offset": 2048, 00:08:44.821 "data_size": 63488 00:08:44.821 }, 00:08:44.821 { 00:08:44.821 "name": "pt2", 00:08:44.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.821 "is_configured": true, 00:08:44.821 "data_offset": 2048, 00:08:44.821 "data_size": 63488 00:08:44.821 } 00:08:44.821 ] 00:08:44.821 } 00:08:44.821 } 00:08:44.821 }' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.821 pt2' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:44.821 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.822 [2024-11-29 07:40:34.702402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e80390c7-9206-452f-b687-f11a35c0d667 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e80390c7-9206-452f-b687-f11a35c0d667 ']' 00:08:44.822 07:40:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.822 [2024-11-29 07:40:34.730016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.822 [2024-11-29 07:40:34.730044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.822 [2024-11-29 07:40:34.730133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.822 [2024-11-29 07:40:34.730190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.822 [2024-11-29 07:40:34.730202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.822 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.081 07:40:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 [2024-11-29 07:40:34.861814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:45.081 [2024-11-29 07:40:34.863608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:45.081 [2024-11-29 07:40:34.863676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:45.081 [2024-11-29 07:40:34.863719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:45.081 [2024-11-29 07:40:34.863734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.081 [2024-11-29 07:40:34.863760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:45.081 request: 00:08:45.081 { 00:08:45.081 "name": "raid_bdev1", 00:08:45.081 "raid_level": "raid1", 00:08:45.081 "base_bdevs": [ 00:08:45.081 "malloc1", 00:08:45.081 "malloc2" 00:08:45.081 ], 00:08:45.081 "superblock": false, 00:08:45.081 "method": "bdev_raid_create", 00:08:45.081 "req_id": 1 00:08:45.081 } 00:08:45.081 Got 
JSON-RPC error response 00:08:45.081 response: 00:08:45.081 { 00:08:45.081 "code": -17, 00:08:45.081 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:45.081 } 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 [2024-11-29 07:40:34.925690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:45.081 [2024-11-29 07:40:34.925735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:45.081 [2024-11-29 07:40:34.925752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:45.081 [2024-11-29 07:40:34.925762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.081 [2024-11-29 07:40:34.927876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.081 [2024-11-29 07:40:34.927912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:45.081 [2024-11-29 07:40:34.927987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:45.081 [2024-11-29 07:40:34.928037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:45.081 pt1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.081 
07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.081 "name": "raid_bdev1", 00:08:45.081 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:45.081 "strip_size_kb": 0, 00:08:45.081 "state": "configuring", 00:08:45.081 "raid_level": "raid1", 00:08:45.081 "superblock": true, 00:08:45.081 "num_base_bdevs": 2, 00:08:45.081 "num_base_bdevs_discovered": 1, 00:08:45.081 "num_base_bdevs_operational": 2, 00:08:45.081 "base_bdevs_list": [ 00:08:45.081 { 00:08:45.081 "name": "pt1", 00:08:45.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.081 "is_configured": true, 00:08:45.081 "data_offset": 2048, 00:08:45.081 "data_size": 63488 00:08:45.081 }, 00:08:45.081 { 00:08:45.081 "name": null, 00:08:45.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.081 "is_configured": false, 00:08:45.081 "data_offset": 2048, 00:08:45.081 "data_size": 63488 00:08:45.081 } 00:08:45.081 ] 00:08:45.081 }' 00:08:45.082 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.082 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.647 [2024-11-29 07:40:35.344999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.647 [2024-11-29 07:40:35.345057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.647 [2024-11-29 07:40:35.345078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:45.647 [2024-11-29 07:40:35.345088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.647 [2024-11-29 07:40:35.345512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.647 [2024-11-29 07:40:35.345532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.647 [2024-11-29 07:40:35.345603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.647 [2024-11-29 07:40:35.345625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.647 [2024-11-29 07:40:35.345751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.647 [2024-11-29 07:40:35.345762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.647 [2024-11-29 07:40:35.345991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:45.647 [2024-11-29 07:40:35.346178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.647 [2024-11-29 07:40:35.346195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:45.647 [2024-11-29 07:40:35.346344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.647 pt2 00:08:45.647 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.648 "name": "raid_bdev1", 00:08:45.648 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:45.648 "strip_size_kb": 0, 00:08:45.648 "state": "online", 00:08:45.648 "raid_level": "raid1", 00:08:45.648 "superblock": true, 00:08:45.648 "num_base_bdevs": 2, 00:08:45.648 "num_base_bdevs_discovered": 2, 00:08:45.648 "num_base_bdevs_operational": 2, 00:08:45.648 "base_bdevs_list": [ 00:08:45.648 { 00:08:45.648 "name": "pt1", 00:08:45.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.648 "is_configured": true, 00:08:45.648 "data_offset": 2048, 00:08:45.648 "data_size": 63488 00:08:45.648 }, 00:08:45.648 { 00:08:45.648 "name": "pt2", 00:08:45.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.648 "is_configured": true, 00:08:45.648 "data_offset": 2048, 00:08:45.648 "data_size": 63488 00:08:45.648 } 00:08:45.648 ] 00:08:45.648 }' 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.648 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.907 [2024-11-29 07:40:35.736529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.907 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.907 "name": "raid_bdev1", 00:08:45.907 "aliases": [ 00:08:45.907 "e80390c7-9206-452f-b687-f11a35c0d667" 00:08:45.907 ], 00:08:45.907 "product_name": "Raid Volume", 00:08:45.907 "block_size": 512, 00:08:45.907 "num_blocks": 63488, 00:08:45.907 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:45.907 "assigned_rate_limits": { 00:08:45.907 "rw_ios_per_sec": 0, 00:08:45.907 "rw_mbytes_per_sec": 0, 00:08:45.907 "r_mbytes_per_sec": 0, 00:08:45.907 "w_mbytes_per_sec": 0 00:08:45.907 }, 00:08:45.907 "claimed": false, 00:08:45.907 "zoned": false, 00:08:45.907 "supported_io_types": { 00:08:45.907 "read": true, 00:08:45.907 "write": true, 00:08:45.907 "unmap": false, 00:08:45.907 "flush": false, 00:08:45.907 "reset": true, 00:08:45.907 "nvme_admin": false, 00:08:45.907 "nvme_io": false, 00:08:45.907 "nvme_io_md": false, 00:08:45.907 "write_zeroes": true, 00:08:45.907 "zcopy": false, 00:08:45.907 "get_zone_info": false, 00:08:45.907 "zone_management": false, 00:08:45.907 "zone_append": false, 00:08:45.907 "compare": false, 00:08:45.907 "compare_and_write": false, 00:08:45.907 "abort": false, 00:08:45.907 "seek_hole": false, 00:08:45.907 "seek_data": false, 00:08:45.907 "copy": false, 00:08:45.907 "nvme_iov_md": false 00:08:45.907 }, 00:08:45.907 "memory_domains": [ 00:08:45.907 { 00:08:45.907 "dma_device_id": 
"system", 00:08:45.907 "dma_device_type": 1 00:08:45.907 }, 00:08:45.907 { 00:08:45.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.907 "dma_device_type": 2 00:08:45.907 }, 00:08:45.907 { 00:08:45.907 "dma_device_id": "system", 00:08:45.907 "dma_device_type": 1 00:08:45.907 }, 00:08:45.907 { 00:08:45.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.907 "dma_device_type": 2 00:08:45.907 } 00:08:45.907 ], 00:08:45.907 "driver_specific": { 00:08:45.907 "raid": { 00:08:45.907 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:45.907 "strip_size_kb": 0, 00:08:45.907 "state": "online", 00:08:45.907 "raid_level": "raid1", 00:08:45.907 "superblock": true, 00:08:45.907 "num_base_bdevs": 2, 00:08:45.907 "num_base_bdevs_discovered": 2, 00:08:45.907 "num_base_bdevs_operational": 2, 00:08:45.907 "base_bdevs_list": [ 00:08:45.907 { 00:08:45.907 "name": "pt1", 00:08:45.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.907 "is_configured": true, 00:08:45.907 "data_offset": 2048, 00:08:45.907 "data_size": 63488 00:08:45.907 }, 00:08:45.907 { 00:08:45.908 "name": "pt2", 00:08:45.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.908 "is_configured": true, 00:08:45.908 "data_offset": 2048, 00:08:45.908 "data_size": 63488 00:08:45.908 } 00:08:45.908 ] 00:08:45.908 } 00:08:45.908 } 00:08:45.908 }' 00:08:45.908 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.908 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:45.908 pt2' 00:08:45.908 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 [2024-11-29 07:40:35.948170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e80390c7-9206-452f-b687-f11a35c0d667 '!=' e80390c7-9206-452f-b687-f11a35c0d667 ']' 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.167 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 [2024-11-29 07:40:35.995900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.167 "name": "raid_bdev1", 00:08:46.167 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:46.167 "strip_size_kb": 0, 00:08:46.167 "state": "online", 00:08:46.167 "raid_level": "raid1", 00:08:46.167 "superblock": true, 00:08:46.167 "num_base_bdevs": 2, 00:08:46.167 "num_base_bdevs_discovered": 1, 00:08:46.167 "num_base_bdevs_operational": 1, 00:08:46.167 "base_bdevs_list": [ 00:08:46.167 { 00:08:46.167 "name": null, 00:08:46.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.167 "is_configured": false, 00:08:46.167 "data_offset": 0, 00:08:46.167 "data_size": 63488 00:08:46.167 }, 00:08:46.167 { 00:08:46.167 "name": "pt2", 00:08:46.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.167 "is_configured": true, 00:08:46.167 "data_offset": 2048, 00:08:46.167 "data_size": 63488 00:08:46.167 } 00:08:46.167 ] 00:08:46.167 }' 
00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.167 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 [2024-11-29 07:40:36.391213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.736 [2024-11-29 07:40:36.391241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.736 [2024-11-29 07:40:36.391305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.736 [2024-11-29 07:40:36.391347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.736 [2024-11-29 07:40:36.391357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 [2024-11-29 07:40:36.459080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.736 [2024-11-29 07:40:36.459134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.736 [2024-11-29 07:40:36.459149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:46.736 [2024-11-29 07:40:36.459158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.736 
[2024-11-29 07:40:36.461263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.736 [2024-11-29 07:40:36.461298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.736 [2024-11-29 07:40:36.461367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:46.736 [2024-11-29 07:40:36.461408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.736 [2024-11-29 07:40:36.461503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.736 [2024-11-29 07:40:36.461519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.736 [2024-11-29 07:40:36.461736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:46.736 [2024-11-29 07:40:36.461896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.736 [2024-11-29 07:40:36.461909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:46.736 [2024-11-29 07:40:36.462039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.736 pt2 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.736 "name": "raid_bdev1", 00:08:46.736 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:46.736 "strip_size_kb": 0, 00:08:46.736 "state": "online", 00:08:46.736 "raid_level": "raid1", 00:08:46.736 "superblock": true, 00:08:46.736 "num_base_bdevs": 2, 00:08:46.736 "num_base_bdevs_discovered": 1, 00:08:46.736 "num_base_bdevs_operational": 1, 00:08:46.736 "base_bdevs_list": [ 00:08:46.736 { 00:08:46.736 "name": null, 00:08:46.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.736 "is_configured": false, 00:08:46.736 "data_offset": 2048, 00:08:46.736 "data_size": 63488 00:08:46.736 }, 00:08:46.736 { 00:08:46.736 "name": "pt2", 00:08:46.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.736 "is_configured": true, 00:08:46.736 "data_offset": 2048, 00:08:46.736 "data_size": 63488 00:08:46.736 } 00:08:46.736 ] 00:08:46.736 }' 
00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.736 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.996 [2024-11-29 07:40:36.886326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.996 [2024-11-29 07:40:36.886355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.996 [2024-11-29 07:40:36.886418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.996 [2024-11-29 07:40:36.886463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.996 [2024-11-29 07:40:36.886471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.996 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.255 [2024-11-29 07:40:36.946243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.255 [2024-11-29 07:40:36.946301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.255 [2024-11-29 07:40:36.946318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:47.255 [2024-11-29 07:40:36.946327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.255 [2024-11-29 07:40:36.948445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.255 [2024-11-29 07:40:36.948477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.255 [2024-11-29 07:40:36.948549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:47.255 [2024-11-29 07:40:36.948595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.255 [2024-11-29 07:40:36.948727] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:47.255 [2024-11-29 07:40:36.948742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.255 [2024-11-29 07:40:36.948757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:47.255 [2024-11-29 07:40:36.948809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:47.255 [2024-11-29 07:40:36.948876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:47.255 [2024-11-29 07:40:36.948887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.255 [2024-11-29 07:40:36.949128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:47.255 [2024-11-29 07:40:36.949262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:47.255 [2024-11-29 07:40:36.949279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:47.255 [2024-11-29 07:40:36.949426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.255 pt1 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.255 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.255 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.255 "name": "raid_bdev1", 00:08:47.255 "uuid": "e80390c7-9206-452f-b687-f11a35c0d667", 00:08:47.255 "strip_size_kb": 0, 00:08:47.255 "state": "online", 00:08:47.255 "raid_level": "raid1", 00:08:47.255 "superblock": true, 00:08:47.255 "num_base_bdevs": 2, 00:08:47.255 "num_base_bdevs_discovered": 1, 00:08:47.255 "num_base_bdevs_operational": 1, 00:08:47.255 "base_bdevs_list": [ 00:08:47.255 { 00:08:47.255 "name": null, 00:08:47.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.255 "is_configured": false, 00:08:47.255 "data_offset": 2048, 00:08:47.255 "data_size": 63488 00:08:47.255 }, 00:08:47.255 { 00:08:47.255 "name": "pt2", 00:08:47.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.255 "is_configured": true, 00:08:47.255 "data_offset": 2048, 00:08:47.255 "data_size": 63488 00:08:47.255 } 00:08:47.255 ] 00:08:47.255 }' 00:08:47.255 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.255 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.514 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.514 [2024-11-29 07:40:37.449593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e80390c7-9206-452f-b687-f11a35c0d667 '!=' e80390c7-9206-452f-b687-f11a35c0d667 ']' 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63034 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63034 ']' 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63034 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63034 00:08:47.773 killing process with pid 
63034 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63034' 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63034 00:08:47.773 [2024-11-29 07:40:37.510861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.773 [2024-11-29 07:40:37.510943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.773 [2024-11-29 07:40:37.510988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.773 [2024-11-29 07:40:37.511002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:47.773 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63034 00:08:47.773 [2024-11-29 07:40:37.705627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.152 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:49.152 00:08:49.152 real 0m5.711s 00:08:49.152 user 0m8.626s 00:08:49.152 sys 0m0.976s 00:08:49.152 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.152 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.152 ************************************ 00:08:49.152 END TEST raid_superblock_test 00:08:49.152 ************************************ 00:08:49.152 07:40:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:49.152 07:40:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.152 07:40:38 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.152 07:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.152 ************************************ 00:08:49.152 START TEST raid_read_error_test 00:08:49.152 ************************************ 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:49.152 07:40:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZFmPEXMqnJ 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63359 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63359 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63359 ']' 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.152 07:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.152 [2024-11-29 07:40:38.953169] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:49.152 [2024-11-29 07:40:38.953296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:08:49.412 [2024-11-29 07:40:39.126896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.412 [2024-11-29 07:40:39.239011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.672 [2024-11-29 07:40:39.428659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.672 [2024-11-29 07:40:39.428717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.930 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.931 BaseBdev1_malloc 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.931 true 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.931 [2024-11-29 07:40:39.827837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.931 [2024-11-29 07:40:39.827892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.931 [2024-11-29 07:40:39.827911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.931 [2024-11-29 07:40:39.827921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.931 [2024-11-29 07:40:39.829963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.931 [2024-11-29 07:40:39.829999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.931 BaseBdev1 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.931 BaseBdev2_malloc 00:08:49.931 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.189 true 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.189 [2024-11-29 07:40:39.893348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:50.189 [2024-11-29 07:40:39.893400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.189 [2024-11-29 07:40:39.893416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:50.189 [2024-11-29 07:40:39.893426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.189 [2024-11-29 07:40:39.895489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.189 [2024-11-29 07:40:39.895526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:50.189 BaseBdev2 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:50.189 07:40:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.189 [2024-11-29 07:40:39.905386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.189 [2024-11-29 07:40:39.907285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.189 [2024-11-29 07:40:39.907499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.189 [2024-11-29 07:40:39.907515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.189 [2024-11-29 07:40:39.907750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:50.189 [2024-11-29 07:40:39.907928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.189 [2024-11-29 07:40:39.907948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:50.189 [2024-11-29 07:40:39.908115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.189 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.190 "name": "raid_bdev1", 00:08:50.190 "uuid": "52d8b0f3-9615-4bbc-8a3f-f304af7514e8", 00:08:50.190 "strip_size_kb": 0, 00:08:50.190 "state": "online", 00:08:50.190 "raid_level": "raid1", 00:08:50.190 "superblock": true, 00:08:50.190 "num_base_bdevs": 2, 00:08:50.190 "num_base_bdevs_discovered": 2, 00:08:50.190 "num_base_bdevs_operational": 2, 00:08:50.190 "base_bdevs_list": [ 00:08:50.190 { 00:08:50.190 "name": "BaseBdev1", 00:08:50.190 "uuid": "bc94c08c-34bd-5c27-840a-6510b884d340", 00:08:50.190 "is_configured": true, 00:08:50.190 "data_offset": 2048, 00:08:50.190 "data_size": 63488 00:08:50.190 }, 00:08:50.190 { 00:08:50.190 "name": "BaseBdev2", 00:08:50.190 "uuid": "dcce6ec0-aa1e-518d-8093-c9bf8872212c", 00:08:50.190 "is_configured": true, 00:08:50.190 "data_offset": 2048, 00:08:50.190 "data_size": 63488 00:08:50.190 } 00:08:50.190 ] 00:08:50.190 }' 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.190 07:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.448 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.448 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.707 [2024-11-29 07:40:40.441807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.670 07:40:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.670 "name": "raid_bdev1", 00:08:51.670 "uuid": "52d8b0f3-9615-4bbc-8a3f-f304af7514e8", 00:08:51.670 "strip_size_kb": 0, 00:08:51.670 "state": "online", 00:08:51.670 "raid_level": "raid1", 00:08:51.670 "superblock": true, 00:08:51.670 "num_base_bdevs": 2, 00:08:51.670 "num_base_bdevs_discovered": 2, 00:08:51.670 "num_base_bdevs_operational": 2, 00:08:51.670 "base_bdevs_list": [ 00:08:51.670 { 00:08:51.670 "name": "BaseBdev1", 00:08:51.670 "uuid": "bc94c08c-34bd-5c27-840a-6510b884d340", 00:08:51.670 "is_configured": true, 00:08:51.670 "data_offset": 2048, 00:08:51.670 "data_size": 63488 00:08:51.670 }, 00:08:51.670 { 00:08:51.670 "name": "BaseBdev2", 00:08:51.670 "uuid": "dcce6ec0-aa1e-518d-8093-c9bf8872212c", 00:08:51.670 "is_configured": true, 00:08:51.670 "data_offset": 2048, 00:08:51.670 "data_size": 63488 
00:08:51.670 } 00:08:51.670 ] 00:08:51.670 }' 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.670 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.929 [2024-11-29 07:40:41.785238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.929 [2024-11-29 07:40:41.785276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.929 [2024-11-29 07:40:41.787953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.929 [2024-11-29 07:40:41.788002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.929 [2024-11-29 07:40:41.788082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.929 [2024-11-29 07:40:41.788094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.929 { 00:08:51.929 "results": [ 00:08:51.929 { 00:08:51.929 "job": "raid_bdev1", 00:08:51.929 "core_mask": "0x1", 00:08:51.929 "workload": "randrw", 00:08:51.929 "percentage": 50, 00:08:51.929 "status": "finished", 00:08:51.929 "queue_depth": 1, 00:08:51.929 "io_size": 131072, 00:08:51.929 "runtime": 1.344397, 00:08:51.929 "iops": 18454.370249264168, 00:08:51.929 "mibps": 2306.796281158021, 00:08:51.929 "io_failed": 0, 00:08:51.929 "io_timeout": 0, 00:08:51.929 "avg_latency_us": 51.623499046904946, 00:08:51.929 "min_latency_us": 22.246288209606988, 00:08:51.929 "max_latency_us": 1452.380786026201 00:08:51.929 } 00:08:51.929 ], 
00:08:51.929 "core_count": 1 00:08:51.929 } 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63359 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63359 ']' 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63359 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.929 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63359 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.930 killing process with pid 63359 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63359' 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63359 00:08:51.930 [2024-11-29 07:40:41.833345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.930 07:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63359 00:08:52.188 [2024-11-29 07:40:41.967590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZFmPEXMqnJ 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:53.564 00:08:53.564 real 0m4.256s 00:08:53.564 user 0m5.088s 00:08:53.564 sys 0m0.533s 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.564 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.564 ************************************ 00:08:53.564 END TEST raid_read_error_test 00:08:53.564 ************************************ 00:08:53.564 07:40:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:53.564 07:40:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.564 07:40:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.564 07:40:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.564 ************************************ 00:08:53.564 START TEST raid_write_error_test 00:08:53.564 ************************************ 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.veaIRkQep3 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63499 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63499 00:08:53.564 07:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63499 ']' 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.565 07:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.565 [2024-11-29 07:40:43.275683] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:53.565 [2024-11-29 07:40:43.275814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63499 ] 00:08:53.565 [2024-11-29 07:40:43.447476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.824 [2024-11-29 07:40:43.556595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.824 [2024-11-29 07:40:43.742641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.824 [2024-11-29 07:40:43.742684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 BaseBdev1_malloc 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 true 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 [2024-11-29 07:40:44.157075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.392 [2024-11-29 07:40:44.157151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.392 [2024-11-29 07:40:44.157170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.392 [2024-11-29 07:40:44.157180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.392 [2024-11-29 07:40:44.159180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.392 [2024-11-29 07:40:44.159215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.392 BaseBdev1 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 BaseBdev2_malloc 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.392 07:40:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 true 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 [2024-11-29 07:40:44.221323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.392 [2024-11-29 07:40:44.221385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.392 [2024-11-29 07:40:44.221400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.392 [2024-11-29 07:40:44.221410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.392 [2024-11-29 07:40:44.223436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.392 [2024-11-29 07:40:44.223469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.392 BaseBdev2 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 [2024-11-29 07:40:44.233351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:54.392 [2024-11-29 07:40:44.235100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.392 [2024-11-29 07:40:44.235299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.392 [2024-11-29 07:40:44.235313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.392 [2024-11-29 07:40:44.235558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.392 [2024-11-29 07:40:44.235741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.392 [2024-11-29 07:40:44.235760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:54.392 [2024-11-29 07:40:44.235917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.392 "name": "raid_bdev1", 00:08:54.392 "uuid": "8b8abbff-4a84-4611-854a-f8777f8cf079", 00:08:54.392 "strip_size_kb": 0, 00:08:54.392 "state": "online", 00:08:54.392 "raid_level": "raid1", 00:08:54.392 "superblock": true, 00:08:54.392 "num_base_bdevs": 2, 00:08:54.392 "num_base_bdevs_discovered": 2, 00:08:54.392 "num_base_bdevs_operational": 2, 00:08:54.392 "base_bdevs_list": [ 00:08:54.392 { 00:08:54.392 "name": "BaseBdev1", 00:08:54.392 "uuid": "9a95af09-6765-58f6-92a5-5418044f19ce", 00:08:54.392 "is_configured": true, 00:08:54.392 "data_offset": 2048, 00:08:54.392 "data_size": 63488 00:08:54.392 }, 00:08:54.392 { 00:08:54.392 "name": "BaseBdev2", 00:08:54.392 "uuid": "a65b8d18-ca1d-5111-9ed1-0a987eb4b773", 00:08:54.392 "is_configured": true, 00:08:54.392 "data_offset": 2048, 00:08:54.392 "data_size": 63488 00:08:54.392 } 00:08:54.392 ] 00:08:54.392 }' 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.392 07:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.960 07:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.960 07:40:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.960 [2024-11-29 07:40:44.797592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 [2024-11-29 07:40:45.713586] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:55.896 [2024-11-29 07:40:45.713661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.896 [2024-11-29 07:40:45.713856] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.896 "name": "raid_bdev1", 00:08:55.896 "uuid": "8b8abbff-4a84-4611-854a-f8777f8cf079", 00:08:55.896 "strip_size_kb": 0, 00:08:55.896 "state": "online", 00:08:55.896 "raid_level": "raid1", 00:08:55.896 "superblock": true, 00:08:55.896 "num_base_bdevs": 2, 00:08:55.896 "num_base_bdevs_discovered": 1, 00:08:55.896 "num_base_bdevs_operational": 1, 00:08:55.896 "base_bdevs_list": [ 00:08:55.896 { 00:08:55.896 "name": null, 00:08:55.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.896 "is_configured": false, 00:08:55.896 "data_offset": 0, 00:08:55.896 "data_size": 63488 00:08:55.896 }, 00:08:55.896 { 00:08:55.896 "name": 
"BaseBdev2", 00:08:55.896 "uuid": "a65b8d18-ca1d-5111-9ed1-0a987eb4b773", 00:08:55.896 "is_configured": true, 00:08:55.896 "data_offset": 2048, 00:08:55.896 "data_size": 63488 00:08:55.896 } 00:08:55.896 ] 00:08:55.896 }' 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.896 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.464 [2024-11-29 07:40:46.146346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.464 [2024-11-29 07:40:46.146381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.464 [2024-11-29 07:40:46.148995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.464 [2024-11-29 07:40:46.149039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.464 [2024-11-29 07:40:46.149105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.464 [2024-11-29 07:40:46.149118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:56.464 { 00:08:56.464 "results": [ 00:08:56.464 { 00:08:56.464 "job": "raid_bdev1", 00:08:56.464 "core_mask": "0x1", 00:08:56.464 "workload": "randrw", 00:08:56.464 "percentage": 50, 00:08:56.464 "status": "finished", 00:08:56.464 "queue_depth": 1, 00:08:56.464 "io_size": 131072, 00:08:56.464 "runtime": 1.349677, 00:08:56.464 "iops": 21695.5612342805, 00:08:56.464 "mibps": 2711.9451542850625, 00:08:56.464 "io_failed": 0, 00:08:56.464 "io_timeout": 0, 
00:08:56.464 "avg_latency_us": 43.52436326890837, 00:08:56.464 "min_latency_us": 21.575545851528386, 00:08:56.464 "max_latency_us": 1345.0620087336245 00:08:56.464 } 00:08:56.464 ], 00:08:56.464 "core_count": 1 00:08:56.464 } 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63499 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63499 ']' 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63499 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63499 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.464 killing process with pid 63499 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63499' 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63499 00:08:56.464 [2024-11-29 07:40:46.181084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.464 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63499 00:08:56.464 [2024-11-29 07:40:46.310700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.veaIRkQep3 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:57.843 00:08:57.843 real 0m4.273s 00:08:57.843 user 0m5.143s 00:08:57.843 sys 0m0.497s 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.843 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.843 ************************************ 00:08:57.843 END TEST raid_write_error_test 00:08:57.843 ************************************ 00:08:57.843 07:40:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:57.843 07:40:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:57.843 07:40:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:57.843 07:40:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.843 07:40:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.843 07:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.843 ************************************ 00:08:57.843 START TEST raid_state_function_test 00:08:57.843 ************************************ 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.843 
07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63637 00:08:57.843 Process raid pid: 63637 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63637' 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63637 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63637 ']' 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.843 07:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.843 [2024-11-29 07:40:47.616262] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:57.843 [2024-11-29 07:40:47.616375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.843 [2024-11-29 07:40:47.776762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.103 [2024-11-29 07:40:47.890273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.361 [2024-11-29 07:40:48.087969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.361 [2024-11-29 07:40:48.088011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.620 [2024-11-29 07:40:48.433251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.620 [2024-11-29 07:40:48.433309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.620 [2024-11-29 07:40:48.433319] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.620 [2024-11-29 07:40:48.433329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.620 [2024-11-29 07:40:48.433335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.620 [2024-11-29 07:40:48.433344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.620 "name": "Existed_Raid", 00:08:58.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.620 "strip_size_kb": 64, 00:08:58.620 "state": "configuring", 00:08:58.620 "raid_level": "raid0", 00:08:58.620 "superblock": false, 00:08:58.620 "num_base_bdevs": 3, 00:08:58.620 "num_base_bdevs_discovered": 0, 00:08:58.620 "num_base_bdevs_operational": 3, 00:08:58.620 "base_bdevs_list": [ 00:08:58.620 { 00:08:58.620 "name": "BaseBdev1", 00:08:58.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.620 "is_configured": false, 00:08:58.620 "data_offset": 0, 00:08:58.620 "data_size": 0 00:08:58.620 }, 00:08:58.620 { 00:08:58.620 "name": "BaseBdev2", 00:08:58.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.620 "is_configured": false, 00:08:58.620 "data_offset": 0, 00:08:58.620 "data_size": 0 00:08:58.620 }, 00:08:58.620 { 00:08:58.620 "name": "BaseBdev3", 00:08:58.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.620 "is_configured": false, 00:08:58.620 "data_offset": 0, 00:08:58.620 "data_size": 0 00:08:58.620 } 00:08:58.620 ] 00:08:58.620 }' 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.620 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.195 07:40:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.195 [2024-11-29 07:40:48.852443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.195 [2024-11-29 07:40:48.852480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.195 [2024-11-29 07:40:48.860443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.195 [2024-11-29 07:40:48.860492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.195 [2024-11-29 07:40:48.860519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.195 [2024-11-29 07:40:48.860530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.195 [2024-11-29 07:40:48.860537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.195 [2024-11-29 07:40:48.860546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.195 [2024-11-29 07:40:48.904715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.195 BaseBdev1 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.195 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.196 [ 00:08:59.196 { 00:08:59.196 "name": "BaseBdev1", 00:08:59.196 "aliases": [ 00:08:59.196 "bbcc5f44-3bc4-4169-8973-39daef40cf2b" 00:08:59.196 ], 00:08:59.196 
"product_name": "Malloc disk", 00:08:59.196 "block_size": 512, 00:08:59.196 "num_blocks": 65536, 00:08:59.196 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:08:59.196 "assigned_rate_limits": { 00:08:59.196 "rw_ios_per_sec": 0, 00:08:59.196 "rw_mbytes_per_sec": 0, 00:08:59.196 "r_mbytes_per_sec": 0, 00:08:59.196 "w_mbytes_per_sec": 0 00:08:59.196 }, 00:08:59.196 "claimed": true, 00:08:59.196 "claim_type": "exclusive_write", 00:08:59.196 "zoned": false, 00:08:59.196 "supported_io_types": { 00:08:59.196 "read": true, 00:08:59.196 "write": true, 00:08:59.196 "unmap": true, 00:08:59.196 "flush": true, 00:08:59.196 "reset": true, 00:08:59.196 "nvme_admin": false, 00:08:59.196 "nvme_io": false, 00:08:59.196 "nvme_io_md": false, 00:08:59.196 "write_zeroes": true, 00:08:59.196 "zcopy": true, 00:08:59.196 "get_zone_info": false, 00:08:59.196 "zone_management": false, 00:08:59.196 "zone_append": false, 00:08:59.196 "compare": false, 00:08:59.196 "compare_and_write": false, 00:08:59.196 "abort": true, 00:08:59.196 "seek_hole": false, 00:08:59.196 "seek_data": false, 00:08:59.196 "copy": true, 00:08:59.196 "nvme_iov_md": false 00:08:59.196 }, 00:08:59.196 "memory_domains": [ 00:08:59.196 { 00:08:59.196 "dma_device_id": "system", 00:08:59.196 "dma_device_type": 1 00:08:59.196 }, 00:08:59.196 { 00:08:59.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.196 "dma_device_type": 2 00:08:59.196 } 00:08:59.196 ], 00:08:59.196 "driver_specific": {} 00:08:59.196 } 00:08:59.196 ] 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.196 07:40:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.196 "name": "Existed_Raid", 00:08:59.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.196 "strip_size_kb": 64, 00:08:59.196 "state": "configuring", 00:08:59.196 "raid_level": "raid0", 00:08:59.196 "superblock": false, 00:08:59.196 "num_base_bdevs": 3, 00:08:59.196 "num_base_bdevs_discovered": 1, 00:08:59.196 "num_base_bdevs_operational": 3, 00:08:59.196 "base_bdevs_list": [ 00:08:59.196 { 00:08:59.196 "name": "BaseBdev1", 
00:08:59.196 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:08:59.196 "is_configured": true, 00:08:59.196 "data_offset": 0, 00:08:59.196 "data_size": 65536 00:08:59.196 }, 00:08:59.196 { 00:08:59.196 "name": "BaseBdev2", 00:08:59.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.196 "is_configured": false, 00:08:59.196 "data_offset": 0, 00:08:59.196 "data_size": 0 00:08:59.196 }, 00:08:59.196 { 00:08:59.196 "name": "BaseBdev3", 00:08:59.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.196 "is_configured": false, 00:08:59.196 "data_offset": 0, 00:08:59.196 "data_size": 0 00:08:59.196 } 00:08:59.196 ] 00:08:59.196 }' 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.196 07:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 [2024-11-29 07:40:49.371944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.474 [2024-11-29 07:40:49.371995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 [2024-11-29 
07:40:49.383962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.474 [2024-11-29 07:40:49.385731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.474 [2024-11-29 07:40:49.385771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.474 [2024-11-29 07:40:49.385796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.474 [2024-11-29 07:40:49.385804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.474 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.475 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.752 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.752 "name": "Existed_Raid", 00:08:59.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.752 "strip_size_kb": 64, 00:08:59.752 "state": "configuring", 00:08:59.752 "raid_level": "raid0", 00:08:59.752 "superblock": false, 00:08:59.752 "num_base_bdevs": 3, 00:08:59.752 "num_base_bdevs_discovered": 1, 00:08:59.752 "num_base_bdevs_operational": 3, 00:08:59.752 "base_bdevs_list": [ 00:08:59.752 { 00:08:59.752 "name": "BaseBdev1", 00:08:59.752 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:08:59.752 "is_configured": true, 00:08:59.752 "data_offset": 0, 00:08:59.752 "data_size": 65536 00:08:59.752 }, 00:08:59.752 { 00:08:59.752 "name": "BaseBdev2", 00:08:59.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.752 "is_configured": false, 00:08:59.752 "data_offset": 0, 00:08:59.752 "data_size": 0 00:08:59.752 }, 00:08:59.752 { 00:08:59.752 "name": "BaseBdev3", 00:08:59.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.752 "is_configured": false, 00:08:59.752 "data_offset": 0, 00:08:59.752 "data_size": 0 00:08:59.752 } 00:08:59.752 ] 00:08:59.752 }' 00:08:59.752 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.752 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 [2024-11-29 07:40:49.838807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.011 BaseBdev2 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.011 07:40:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.011 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 [ 00:09:00.011 { 00:09:00.011 "name": "BaseBdev2", 00:09:00.011 "aliases": [ 00:09:00.011 "02bdce1b-b736-4b16-bcd7-68ca9465f6fc" 00:09:00.011 ], 00:09:00.011 "product_name": "Malloc disk", 00:09:00.011 "block_size": 512, 00:09:00.011 "num_blocks": 65536, 00:09:00.011 "uuid": "02bdce1b-b736-4b16-bcd7-68ca9465f6fc", 00:09:00.011 "assigned_rate_limits": { 00:09:00.011 "rw_ios_per_sec": 0, 00:09:00.011 "rw_mbytes_per_sec": 0, 00:09:00.011 "r_mbytes_per_sec": 0, 00:09:00.011 "w_mbytes_per_sec": 0 00:09:00.011 }, 00:09:00.011 "claimed": true, 00:09:00.011 "claim_type": "exclusive_write", 00:09:00.011 "zoned": false, 00:09:00.011 "supported_io_types": { 00:09:00.011 "read": true, 00:09:00.011 "write": true, 00:09:00.011 "unmap": true, 00:09:00.011 "flush": true, 00:09:00.011 "reset": true, 00:09:00.011 "nvme_admin": false, 00:09:00.011 "nvme_io": false, 00:09:00.011 "nvme_io_md": false, 00:09:00.011 "write_zeroes": true, 00:09:00.011 "zcopy": true, 00:09:00.011 "get_zone_info": false, 00:09:00.011 "zone_management": false, 00:09:00.011 "zone_append": false, 00:09:00.011 "compare": false, 00:09:00.012 "compare_and_write": false, 00:09:00.012 "abort": true, 00:09:00.012 "seek_hole": false, 00:09:00.012 "seek_data": false, 00:09:00.012 "copy": true, 00:09:00.012 "nvme_iov_md": false 00:09:00.012 }, 00:09:00.012 "memory_domains": [ 00:09:00.012 { 00:09:00.012 "dma_device_id": "system", 00:09:00.012 "dma_device_type": 1 00:09:00.012 }, 00:09:00.012 { 00:09:00.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.012 "dma_device_type": 2 00:09:00.012 } 00:09:00.012 ], 00:09:00.012 "driver_specific": {} 00:09:00.012 } 00:09:00.012 ] 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.012 07:40:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.012 "name": "Existed_Raid", 00:09:00.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.012 "strip_size_kb": 64, 00:09:00.012 "state": "configuring", 00:09:00.012 "raid_level": "raid0", 00:09:00.012 "superblock": false, 00:09:00.012 "num_base_bdevs": 3, 00:09:00.012 "num_base_bdevs_discovered": 2, 00:09:00.012 "num_base_bdevs_operational": 3, 00:09:00.012 "base_bdevs_list": [ 00:09:00.012 { 00:09:00.012 "name": "BaseBdev1", 00:09:00.012 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:09:00.012 "is_configured": true, 00:09:00.012 "data_offset": 0, 00:09:00.012 "data_size": 65536 00:09:00.012 }, 00:09:00.012 { 00:09:00.012 "name": "BaseBdev2", 00:09:00.012 "uuid": "02bdce1b-b736-4b16-bcd7-68ca9465f6fc", 00:09:00.012 "is_configured": true, 00:09:00.012 "data_offset": 0, 00:09:00.012 "data_size": 65536 00:09:00.012 }, 00:09:00.012 { 00:09:00.012 "name": "BaseBdev3", 00:09:00.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.012 "is_configured": false, 00:09:00.012 "data_offset": 0, 00:09:00.012 "data_size": 0 00:09:00.012 } 00:09:00.012 ] 00:09:00.012 }' 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.012 07:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 [2024-11-29 07:40:50.339642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.580 [2024-11-29 07:40:50.339689] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.580 [2024-11-29 07:40:50.339703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:00.580 [2024-11-29 07:40:50.339981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.580 [2024-11-29 07:40:50.340206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.580 [2024-11-29 07:40:50.340224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.580 [2024-11-29 07:40:50.340472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.580 BaseBdev3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 
07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.580 [ 00:09:00.580 { 00:09:00.580 "name": "BaseBdev3", 00:09:00.580 "aliases": [ 00:09:00.580 "5547d2e8-b526-4c66-8999-1512c2f28485" 00:09:00.580 ], 00:09:00.580 "product_name": "Malloc disk", 00:09:00.580 "block_size": 512, 00:09:00.580 "num_blocks": 65536, 00:09:00.580 "uuid": "5547d2e8-b526-4c66-8999-1512c2f28485", 00:09:00.580 "assigned_rate_limits": { 00:09:00.580 "rw_ios_per_sec": 0, 00:09:00.580 "rw_mbytes_per_sec": 0, 00:09:00.580 "r_mbytes_per_sec": 0, 00:09:00.580 "w_mbytes_per_sec": 0 00:09:00.580 }, 00:09:00.580 "claimed": true, 00:09:00.580 "claim_type": "exclusive_write", 00:09:00.580 "zoned": false, 00:09:00.580 "supported_io_types": { 00:09:00.580 "read": true, 00:09:00.580 "write": true, 00:09:00.580 "unmap": true, 00:09:00.580 "flush": true, 00:09:00.580 "reset": true, 00:09:00.580 "nvme_admin": false, 00:09:00.580 "nvme_io": false, 00:09:00.580 "nvme_io_md": false, 00:09:00.580 "write_zeroes": true, 00:09:00.580 "zcopy": true, 00:09:00.580 "get_zone_info": false, 00:09:00.580 "zone_management": false, 00:09:00.580 "zone_append": false, 00:09:00.580 "compare": false, 00:09:00.580 "compare_and_write": false, 00:09:00.580 "abort": true, 00:09:00.580 "seek_hole": false, 00:09:00.580 "seek_data": false, 00:09:00.580 "copy": true, 00:09:00.580 "nvme_iov_md": false 00:09:00.580 }, 00:09:00.580 "memory_domains": [ 00:09:00.580 { 00:09:00.580 "dma_device_id": "system", 00:09:00.580 "dma_device_type": 1 00:09:00.580 }, 00:09:00.580 { 00:09:00.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.580 "dma_device_type": 2 00:09:00.580 } 00:09:00.580 ], 00:09:00.580 "driver_specific": {} 00:09:00.580 } 00:09:00.580 ] 
00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.580 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.580 "name": "Existed_Raid", 00:09:00.580 "uuid": "736540c5-96dc-4ee3-9f13-ac5bf02f0c01", 00:09:00.580 "strip_size_kb": 64, 00:09:00.580 "state": "online", 00:09:00.581 "raid_level": "raid0", 00:09:00.581 "superblock": false, 00:09:00.581 "num_base_bdevs": 3, 00:09:00.581 "num_base_bdevs_discovered": 3, 00:09:00.581 "num_base_bdevs_operational": 3, 00:09:00.581 "base_bdevs_list": [ 00:09:00.581 { 00:09:00.581 "name": "BaseBdev1", 00:09:00.581 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:09:00.581 "is_configured": true, 00:09:00.581 "data_offset": 0, 00:09:00.581 "data_size": 65536 00:09:00.581 }, 00:09:00.581 { 00:09:00.581 "name": "BaseBdev2", 00:09:00.581 "uuid": "02bdce1b-b736-4b16-bcd7-68ca9465f6fc", 00:09:00.581 "is_configured": true, 00:09:00.581 "data_offset": 0, 00:09:00.581 "data_size": 65536 00:09:00.581 }, 00:09:00.581 { 00:09:00.581 "name": "BaseBdev3", 00:09:00.581 "uuid": "5547d2e8-b526-4c66-8999-1512c2f28485", 00:09:00.581 "is_configured": true, 00:09:00.581 "data_offset": 0, 00:09:00.581 "data_size": 65536 00:09:00.581 } 00:09:00.581 ] 00:09:00.581 }' 00:09:00.581 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.581 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.148 [2024-11-29 07:40:50.803205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.148 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.148 "name": "Existed_Raid", 00:09:01.148 "aliases": [ 00:09:01.148 "736540c5-96dc-4ee3-9f13-ac5bf02f0c01" 00:09:01.148 ], 00:09:01.148 "product_name": "Raid Volume", 00:09:01.148 "block_size": 512, 00:09:01.148 "num_blocks": 196608, 00:09:01.148 "uuid": "736540c5-96dc-4ee3-9f13-ac5bf02f0c01", 00:09:01.148 "assigned_rate_limits": { 00:09:01.148 "rw_ios_per_sec": 0, 00:09:01.148 "rw_mbytes_per_sec": 0, 00:09:01.148 "r_mbytes_per_sec": 0, 00:09:01.148 "w_mbytes_per_sec": 0 00:09:01.148 }, 00:09:01.148 "claimed": false, 00:09:01.148 "zoned": false, 00:09:01.148 "supported_io_types": { 00:09:01.148 "read": true, 00:09:01.148 "write": true, 00:09:01.148 "unmap": true, 00:09:01.148 "flush": true, 00:09:01.148 "reset": true, 00:09:01.148 "nvme_admin": false, 00:09:01.148 "nvme_io": false, 00:09:01.148 "nvme_io_md": false, 00:09:01.148 "write_zeroes": true, 00:09:01.148 "zcopy": false, 00:09:01.148 "get_zone_info": false, 00:09:01.148 "zone_management": false, 00:09:01.148 
"zone_append": false, 00:09:01.148 "compare": false, 00:09:01.148 "compare_and_write": false, 00:09:01.148 "abort": false, 00:09:01.148 "seek_hole": false, 00:09:01.148 "seek_data": false, 00:09:01.148 "copy": false, 00:09:01.148 "nvme_iov_md": false 00:09:01.148 }, 00:09:01.148 "memory_domains": [ 00:09:01.148 { 00:09:01.148 "dma_device_id": "system", 00:09:01.148 "dma_device_type": 1 00:09:01.148 }, 00:09:01.148 { 00:09:01.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.148 "dma_device_type": 2 00:09:01.148 }, 00:09:01.148 { 00:09:01.148 "dma_device_id": "system", 00:09:01.148 "dma_device_type": 1 00:09:01.148 }, 00:09:01.148 { 00:09:01.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.148 "dma_device_type": 2 00:09:01.148 }, 00:09:01.148 { 00:09:01.148 "dma_device_id": "system", 00:09:01.148 "dma_device_type": 1 00:09:01.148 }, 00:09:01.148 { 00:09:01.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.148 "dma_device_type": 2 00:09:01.148 } 00:09:01.148 ], 00:09:01.149 "driver_specific": { 00:09:01.149 "raid": { 00:09:01.149 "uuid": "736540c5-96dc-4ee3-9f13-ac5bf02f0c01", 00:09:01.149 "strip_size_kb": 64, 00:09:01.149 "state": "online", 00:09:01.149 "raid_level": "raid0", 00:09:01.149 "superblock": false, 00:09:01.149 "num_base_bdevs": 3, 00:09:01.149 "num_base_bdevs_discovered": 3, 00:09:01.149 "num_base_bdevs_operational": 3, 00:09:01.149 "base_bdevs_list": [ 00:09:01.149 { 00:09:01.149 "name": "BaseBdev1", 00:09:01.149 "uuid": "bbcc5f44-3bc4-4169-8973-39daef40cf2b", 00:09:01.149 "is_configured": true, 00:09:01.149 "data_offset": 0, 00:09:01.149 "data_size": 65536 00:09:01.149 }, 00:09:01.149 { 00:09:01.149 "name": "BaseBdev2", 00:09:01.149 "uuid": "02bdce1b-b736-4b16-bcd7-68ca9465f6fc", 00:09:01.149 "is_configured": true, 00:09:01.149 "data_offset": 0, 00:09:01.149 "data_size": 65536 00:09:01.149 }, 00:09:01.149 { 00:09:01.149 "name": "BaseBdev3", 00:09:01.149 "uuid": "5547d2e8-b526-4c66-8999-1512c2f28485", 00:09:01.149 "is_configured": true, 
00:09:01.149 "data_offset": 0, 00:09:01.149 "data_size": 65536 00:09:01.149 } 00:09:01.149 ] 00:09:01.149 } 00:09:01.149 } 00:09:01.149 }' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.149 BaseBdev2 00:09:01.149 BaseBdev3' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.149 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 [2024-11-29 07:40:51.078476] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.149 [2024-11-29 07:40:51.078503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.149 [2024-11-29 07:40:51.078551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.408 "name": "Existed_Raid", 00:09:01.408 "uuid": "736540c5-96dc-4ee3-9f13-ac5bf02f0c01", 00:09:01.408 "strip_size_kb": 64, 00:09:01.408 "state": "offline", 00:09:01.408 "raid_level": "raid0", 00:09:01.408 "superblock": false, 00:09:01.408 "num_base_bdevs": 3, 00:09:01.408 "num_base_bdevs_discovered": 2, 00:09:01.408 "num_base_bdevs_operational": 2, 00:09:01.408 "base_bdevs_list": [ 00:09:01.408 { 00:09:01.408 "name": null, 00:09:01.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.408 "is_configured": false, 00:09:01.408 "data_offset": 0, 00:09:01.408 "data_size": 65536 00:09:01.408 }, 00:09:01.408 { 00:09:01.408 "name": "BaseBdev2", 00:09:01.408 "uuid": "02bdce1b-b736-4b16-bcd7-68ca9465f6fc", 00:09:01.408 "is_configured": true, 00:09:01.408 "data_offset": 0, 00:09:01.408 "data_size": 65536 00:09:01.408 }, 00:09:01.408 { 00:09:01.408 "name": "BaseBdev3", 00:09:01.408 "uuid": "5547d2e8-b526-4c66-8999-1512c2f28485", 00:09:01.408 "is_configured": true, 00:09:01.408 "data_offset": 0, 00:09:01.408 "data_size": 65536 00:09:01.408 } 00:09:01.408 ] 00:09:01.408 }' 00:09:01.408 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.408 07:40:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.667 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 [2024-11-29 07:40:51.648733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.926 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.926 [2024-11-29 07:40:51.781815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.926 [2024-11-29 07:40:51.781869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.185 07:40:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.185 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 BaseBdev2 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 [ 00:09:02.186 { 00:09:02.186 "name": "BaseBdev2", 00:09:02.186 "aliases": [ 00:09:02.186 "a6c86a40-564a-4769-8bbb-f0212ef68503" 00:09:02.186 ], 00:09:02.186 "product_name": "Malloc disk", 00:09:02.186 "block_size": 512, 00:09:02.186 "num_blocks": 65536, 00:09:02.186 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:02.186 "assigned_rate_limits": { 00:09:02.186 "rw_ios_per_sec": 0, 00:09:02.186 "rw_mbytes_per_sec": 0, 00:09:02.186 "r_mbytes_per_sec": 0, 00:09:02.186 "w_mbytes_per_sec": 0 00:09:02.186 }, 00:09:02.186 "claimed": false, 00:09:02.186 "zoned": false, 00:09:02.186 "supported_io_types": { 00:09:02.186 "read": true, 00:09:02.186 "write": true, 00:09:02.186 "unmap": true, 00:09:02.186 "flush": true, 00:09:02.186 "reset": true, 00:09:02.186 "nvme_admin": false, 00:09:02.186 "nvme_io": false, 00:09:02.186 "nvme_io_md": false, 00:09:02.186 "write_zeroes": true, 00:09:02.186 "zcopy": true, 00:09:02.186 "get_zone_info": false, 00:09:02.186 "zone_management": false, 00:09:02.186 "zone_append": false, 00:09:02.186 "compare": false, 00:09:02.186 "compare_and_write": false, 00:09:02.186 "abort": true, 00:09:02.186 "seek_hole": false, 00:09:02.186 "seek_data": false, 00:09:02.186 "copy": true, 00:09:02.186 "nvme_iov_md": false 00:09:02.186 }, 00:09:02.186 "memory_domains": [ 00:09:02.186 { 00:09:02.186 "dma_device_id": "system", 00:09:02.186 "dma_device_type": 1 00:09:02.186 }, 00:09:02.186 { 00:09:02.186 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:02.186 "dma_device_type": 2 00:09:02.186 } 00:09:02.186 ], 00:09:02.186 "driver_specific": {} 00:09:02.186 } 00:09:02.186 ] 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 BaseBdev3 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 [ 00:09:02.186 { 00:09:02.186 "name": "BaseBdev3", 00:09:02.186 "aliases": [ 00:09:02.186 "128bc1ec-f9f9-4608-8a91-6721e7648043" 00:09:02.186 ], 00:09:02.186 "product_name": "Malloc disk", 00:09:02.186 "block_size": 512, 00:09:02.186 "num_blocks": 65536, 00:09:02.186 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:02.186 "assigned_rate_limits": { 00:09:02.186 "rw_ios_per_sec": 0, 00:09:02.186 "rw_mbytes_per_sec": 0, 00:09:02.186 "r_mbytes_per_sec": 0, 00:09:02.186 "w_mbytes_per_sec": 0 00:09:02.186 }, 00:09:02.186 "claimed": false, 00:09:02.186 "zoned": false, 00:09:02.186 "supported_io_types": { 00:09:02.186 "read": true, 00:09:02.186 "write": true, 00:09:02.186 "unmap": true, 00:09:02.186 "flush": true, 00:09:02.186 "reset": true, 00:09:02.186 "nvme_admin": false, 00:09:02.186 "nvme_io": false, 00:09:02.186 "nvme_io_md": false, 00:09:02.186 "write_zeroes": true, 00:09:02.186 "zcopy": true, 00:09:02.186 "get_zone_info": false, 00:09:02.186 "zone_management": false, 00:09:02.186 "zone_append": false, 00:09:02.186 "compare": false, 00:09:02.186 "compare_and_write": false, 00:09:02.186 "abort": true, 00:09:02.186 "seek_hole": false, 00:09:02.186 "seek_data": false, 00:09:02.186 "copy": true, 00:09:02.186 "nvme_iov_md": false 00:09:02.186 }, 00:09:02.186 "memory_domains": [ 00:09:02.186 { 00:09:02.186 "dma_device_id": "system", 00:09:02.186 "dma_device_type": 1 00:09:02.186 }, 00:09:02.186 { 00:09:02.186 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:02.186 "dma_device_type": 2 00:09:02.186 } 00:09:02.186 ], 00:09:02.186 "driver_specific": {} 00:09:02.186 } 00:09:02.186 ] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 [2024-11-29 07:40:52.075574] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.186 [2024-11-29 07:40:52.075616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.186 [2024-11-29 07:40:52.075636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.186 [2024-11-29 07:40:52.077450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.186 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.187 
07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.187 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.445 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.445 "name": "Existed_Raid", 00:09:02.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.445 "strip_size_kb": 64, 00:09:02.445 "state": "configuring", 00:09:02.445 "raid_level": "raid0", 00:09:02.445 "superblock": false, 00:09:02.445 "num_base_bdevs": 3, 00:09:02.445 "num_base_bdevs_discovered": 2, 00:09:02.445 "num_base_bdevs_operational": 3, 00:09:02.445 "base_bdevs_list": [ 00:09:02.445 { 00:09:02.445 "name": "BaseBdev1", 00:09:02.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.445 "is_configured": false, 00:09:02.445 
"data_offset": 0, 00:09:02.445 "data_size": 0 00:09:02.445 }, 00:09:02.445 { 00:09:02.445 "name": "BaseBdev2", 00:09:02.445 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:02.445 "is_configured": true, 00:09:02.445 "data_offset": 0, 00:09:02.445 "data_size": 65536 00:09:02.445 }, 00:09:02.445 { 00:09:02.445 "name": "BaseBdev3", 00:09:02.445 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:02.445 "is_configured": true, 00:09:02.445 "data_offset": 0, 00:09:02.445 "data_size": 65536 00:09:02.445 } 00:09:02.445 ] 00:09:02.445 }' 00:09:02.445 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.445 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.705 [2024-11-29 07:40:52.510889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.705 "name": "Existed_Raid", 00:09:02.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.705 "strip_size_kb": 64, 00:09:02.705 "state": "configuring", 00:09:02.705 "raid_level": "raid0", 00:09:02.705 "superblock": false, 00:09:02.705 "num_base_bdevs": 3, 00:09:02.705 "num_base_bdevs_discovered": 1, 00:09:02.705 "num_base_bdevs_operational": 3, 00:09:02.705 "base_bdevs_list": [ 00:09:02.705 { 00:09:02.705 "name": "BaseBdev1", 00:09:02.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.705 "is_configured": false, 00:09:02.705 "data_offset": 0, 00:09:02.705 "data_size": 0 00:09:02.705 }, 00:09:02.705 { 00:09:02.705 "name": null, 00:09:02.705 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:02.705 "is_configured": false, 00:09:02.705 "data_offset": 0, 00:09:02.705 "data_size": 65536 00:09:02.705 }, 00:09:02.705 { 
00:09:02.705 "name": "BaseBdev3", 00:09:02.705 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:02.705 "is_configured": true, 00:09:02.705 "data_offset": 0, 00:09:02.705 "data_size": 65536 00:09:02.705 } 00:09:02.705 ] 00:09:02.705 }' 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.705 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.273 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.274 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 [2024-11-29 07:40:53.006891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.274 BaseBdev1 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.274 07:40:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 [ 00:09:03.274 { 00:09:03.274 "name": "BaseBdev1", 00:09:03.274 "aliases": [ 00:09:03.274 "f7616e89-617b-4c3a-b20f-7be302bbedc2" 00:09:03.274 ], 00:09:03.274 "product_name": "Malloc disk", 00:09:03.274 "block_size": 512, 00:09:03.274 "num_blocks": 65536, 00:09:03.274 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:03.274 "assigned_rate_limits": { 00:09:03.274 "rw_ios_per_sec": 0, 00:09:03.274 "rw_mbytes_per_sec": 0, 00:09:03.274 "r_mbytes_per_sec": 0, 00:09:03.274 "w_mbytes_per_sec": 0 00:09:03.274 }, 00:09:03.274 "claimed": true, 00:09:03.274 "claim_type": "exclusive_write", 00:09:03.274 "zoned": false, 00:09:03.274 "supported_io_types": { 00:09:03.274 "read": true, 00:09:03.274 "write": true, 00:09:03.274 "unmap": true, 00:09:03.274 "flush": true, 
00:09:03.274 "reset": true, 00:09:03.274 "nvme_admin": false, 00:09:03.274 "nvme_io": false, 00:09:03.274 "nvme_io_md": false, 00:09:03.274 "write_zeroes": true, 00:09:03.274 "zcopy": true, 00:09:03.274 "get_zone_info": false, 00:09:03.274 "zone_management": false, 00:09:03.274 "zone_append": false, 00:09:03.274 "compare": false, 00:09:03.274 "compare_and_write": false, 00:09:03.274 "abort": true, 00:09:03.274 "seek_hole": false, 00:09:03.274 "seek_data": false, 00:09:03.274 "copy": true, 00:09:03.274 "nvme_iov_md": false 00:09:03.274 }, 00:09:03.274 "memory_domains": [ 00:09:03.274 { 00:09:03.274 "dma_device_id": "system", 00:09:03.274 "dma_device_type": 1 00:09:03.274 }, 00:09:03.274 { 00:09:03.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.274 "dma_device_type": 2 00:09:03.274 } 00:09:03.274 ], 00:09:03.274 "driver_specific": {} 00:09:03.274 } 00:09:03.274 ] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.274 "name": "Existed_Raid", 00:09:03.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.274 "strip_size_kb": 64, 00:09:03.274 "state": "configuring", 00:09:03.274 "raid_level": "raid0", 00:09:03.274 "superblock": false, 00:09:03.274 "num_base_bdevs": 3, 00:09:03.274 "num_base_bdevs_discovered": 2, 00:09:03.274 "num_base_bdevs_operational": 3, 00:09:03.274 "base_bdevs_list": [ 00:09:03.274 { 00:09:03.274 "name": "BaseBdev1", 00:09:03.274 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:03.274 "is_configured": true, 00:09:03.274 "data_offset": 0, 00:09:03.274 "data_size": 65536 00:09:03.274 }, 00:09:03.274 { 00:09:03.274 "name": null, 00:09:03.274 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:03.274 "is_configured": false, 00:09:03.274 "data_offset": 0, 00:09:03.274 "data_size": 65536 00:09:03.274 }, 00:09:03.274 { 00:09:03.274 "name": "BaseBdev3", 00:09:03.274 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:03.274 "is_configured": true, 00:09:03.274 "data_offset": 0, 00:09:03.274 "data_size": 65536 
00:09:03.274 } 00:09:03.274 ] 00:09:03.274 }' 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.274 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.533 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.533 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.533 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.792 [2024-11-29 07:40:53.530113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.792 
07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.792 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.793 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.793 "name": "Existed_Raid", 00:09:03.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.793 "strip_size_kb": 64, 00:09:03.793 "state": "configuring", 00:09:03.793 "raid_level": "raid0", 00:09:03.793 "superblock": false, 00:09:03.793 "num_base_bdevs": 3, 00:09:03.793 "num_base_bdevs_discovered": 1, 00:09:03.793 "num_base_bdevs_operational": 3, 00:09:03.793 "base_bdevs_list": [ 00:09:03.793 { 00:09:03.793 "name": "BaseBdev1", 00:09:03.793 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:03.793 "is_configured": true, 00:09:03.793 "data_offset": 0, 00:09:03.793 "data_size": 65536 00:09:03.793 }, 00:09:03.793 { 00:09:03.793 "name": null, 
00:09:03.793 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:03.793 "is_configured": false, 00:09:03.793 "data_offset": 0, 00:09:03.793 "data_size": 65536 00:09:03.793 }, 00:09:03.793 { 00:09:03.793 "name": null, 00:09:03.793 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:03.793 "is_configured": false, 00:09:03.793 "data_offset": 0, 00:09:03.793 "data_size": 65536 00:09:03.793 } 00:09:03.793 ] 00:09:03.793 }' 00:09:03.793 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.793 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.052 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.052 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.052 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.052 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.052 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.311 [2024-11-29 07:40:54.017280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.311 "name": "Existed_Raid", 00:09:04.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.311 "strip_size_kb": 64, 00:09:04.311 "state": "configuring", 00:09:04.311 "raid_level": "raid0", 00:09:04.311 "superblock": false, 00:09:04.311 
"num_base_bdevs": 3, 00:09:04.311 "num_base_bdevs_discovered": 2, 00:09:04.311 "num_base_bdevs_operational": 3, 00:09:04.311 "base_bdevs_list": [ 00:09:04.311 { 00:09:04.311 "name": "BaseBdev1", 00:09:04.311 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:04.311 "is_configured": true, 00:09:04.311 "data_offset": 0, 00:09:04.311 "data_size": 65536 00:09:04.311 }, 00:09:04.311 { 00:09:04.311 "name": null, 00:09:04.311 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:04.311 "is_configured": false, 00:09:04.311 "data_offset": 0, 00:09:04.311 "data_size": 65536 00:09:04.311 }, 00:09:04.311 { 00:09:04.311 "name": "BaseBdev3", 00:09:04.311 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:04.311 "is_configured": true, 00:09:04.311 "data_offset": 0, 00:09:04.311 "data_size": 65536 00:09:04.311 } 00:09:04.311 ] 00:09:04.311 }' 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.311 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.570 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.570 07:40:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.570 [2024-11-29 07:40:54.476512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.829 "name": "Existed_Raid", 00:09:04.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.829 "strip_size_kb": 64, 00:09:04.829 "state": "configuring", 00:09:04.829 "raid_level": "raid0", 00:09:04.829 "superblock": false, 00:09:04.829 "num_base_bdevs": 3, 00:09:04.829 "num_base_bdevs_discovered": 1, 00:09:04.829 "num_base_bdevs_operational": 3, 00:09:04.829 "base_bdevs_list": [ 00:09:04.829 { 00:09:04.829 "name": null, 00:09:04.829 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:04.829 "is_configured": false, 00:09:04.829 "data_offset": 0, 00:09:04.829 "data_size": 65536 00:09:04.829 }, 00:09:04.829 { 00:09:04.829 "name": null, 00:09:04.829 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:04.829 "is_configured": false, 00:09:04.829 "data_offset": 0, 00:09:04.829 "data_size": 65536 00:09:04.829 }, 00:09:04.829 { 00:09:04.829 "name": "BaseBdev3", 00:09:04.829 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:04.829 "is_configured": true, 00:09:04.829 "data_offset": 0, 00:09:04.829 "data_size": 65536 00:09:04.829 } 00:09:04.829 ] 00:09:04.829 }' 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.829 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.089 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.089 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.089 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.089 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.089 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.347 [2024-11-29 07:40:55.049682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.347 "name": "Existed_Raid", 00:09:05.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.347 "strip_size_kb": 64, 00:09:05.347 "state": "configuring", 00:09:05.347 "raid_level": "raid0", 00:09:05.347 "superblock": false, 00:09:05.347 "num_base_bdevs": 3, 00:09:05.347 "num_base_bdevs_discovered": 2, 00:09:05.347 "num_base_bdevs_operational": 3, 00:09:05.347 "base_bdevs_list": [ 00:09:05.347 { 00:09:05.347 "name": null, 00:09:05.347 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:05.347 "is_configured": false, 00:09:05.347 "data_offset": 0, 00:09:05.347 "data_size": 65536 00:09:05.347 }, 00:09:05.347 { 00:09:05.347 "name": "BaseBdev2", 00:09:05.347 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:05.347 "is_configured": true, 00:09:05.347 "data_offset": 0, 00:09:05.347 "data_size": 65536 00:09:05.347 }, 00:09:05.347 { 00:09:05.347 "name": "BaseBdev3", 00:09:05.347 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:05.347 "is_configured": true, 00:09:05.347 "data_offset": 0, 00:09:05.347 "data_size": 65536 00:09:05.347 } 00:09:05.347 ] 00:09:05.347 }' 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.347 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.605 07:40:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f7616e89-617b-4c3a-b20f-7be302bbedc2 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.605 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.864 [2024-11-29 07:40:55.581588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.864 [2024-11-29 07:40:55.581635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.864 [2024-11-29 07:40:55.581660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:05.864 [2024-11-29 07:40:55.581906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:05.864 [2024-11-29 07:40:55.582081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.864 [2024-11-29 07:40:55.582113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:05.864 [2024-11-29 07:40:55.582377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.864 NewBaseBdev 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:05.864 [ 00:09:05.864 { 00:09:05.864 "name": "NewBaseBdev", 00:09:05.864 "aliases": [ 00:09:05.864 "f7616e89-617b-4c3a-b20f-7be302bbedc2" 00:09:05.864 ], 00:09:05.864 "product_name": "Malloc disk", 00:09:05.864 "block_size": 512, 00:09:05.864 "num_blocks": 65536, 00:09:05.864 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:05.864 "assigned_rate_limits": { 00:09:05.864 "rw_ios_per_sec": 0, 00:09:05.864 "rw_mbytes_per_sec": 0, 00:09:05.864 "r_mbytes_per_sec": 0, 00:09:05.864 "w_mbytes_per_sec": 0 00:09:05.864 }, 00:09:05.864 "claimed": true, 00:09:05.864 "claim_type": "exclusive_write", 00:09:05.864 "zoned": false, 00:09:05.864 "supported_io_types": { 00:09:05.864 "read": true, 00:09:05.864 "write": true, 00:09:05.864 "unmap": true, 00:09:05.864 "flush": true, 00:09:05.864 "reset": true, 00:09:05.864 "nvme_admin": false, 00:09:05.864 "nvme_io": false, 00:09:05.864 "nvme_io_md": false, 00:09:05.864 "write_zeroes": true, 00:09:05.864 "zcopy": true, 00:09:05.864 "get_zone_info": false, 00:09:05.864 "zone_management": false, 00:09:05.864 "zone_append": false, 00:09:05.864 "compare": false, 00:09:05.864 "compare_and_write": false, 00:09:05.864 "abort": true, 00:09:05.864 "seek_hole": false, 00:09:05.864 "seek_data": false, 00:09:05.864 "copy": true, 00:09:05.864 "nvme_iov_md": false 00:09:05.864 }, 00:09:05.864 "memory_domains": [ 00:09:05.864 { 00:09:05.864 "dma_device_id": "system", 00:09:05.864 "dma_device_type": 1 00:09:05.864 }, 00:09:05.864 { 00:09:05.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.864 "dma_device_type": 2 00:09:05.864 } 00:09:05.864 ], 00:09:05.864 "driver_specific": {} 00:09:05.864 } 00:09:05.864 ] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.864 "name": "Existed_Raid", 00:09:05.864 "uuid": "7299d60a-be8c-41b4-a159-c612d04f2591", 00:09:05.864 "strip_size_kb": 64, 00:09:05.864 "state": "online", 00:09:05.864 "raid_level": "raid0", 00:09:05.864 "superblock": false, 00:09:05.864 "num_base_bdevs": 3, 00:09:05.864 
"num_base_bdevs_discovered": 3, 00:09:05.864 "num_base_bdevs_operational": 3, 00:09:05.864 "base_bdevs_list": [ 00:09:05.864 { 00:09:05.864 "name": "NewBaseBdev", 00:09:05.864 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:05.864 "is_configured": true, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 65536 00:09:05.864 }, 00:09:05.864 { 00:09:05.864 "name": "BaseBdev2", 00:09:05.864 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:05.864 "is_configured": true, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 65536 00:09:05.864 }, 00:09:05.864 { 00:09:05.864 "name": "BaseBdev3", 00:09:05.864 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:05.864 "is_configured": true, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 65536 00:09:05.864 } 00:09:05.864 ] 00:09:05.864 }' 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.864 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.123 [2024-11-29 07:40:56.041196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.123 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.383 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.383 "name": "Existed_Raid", 00:09:06.383 "aliases": [ 00:09:06.383 "7299d60a-be8c-41b4-a159-c612d04f2591" 00:09:06.383 ], 00:09:06.383 "product_name": "Raid Volume", 00:09:06.383 "block_size": 512, 00:09:06.383 "num_blocks": 196608, 00:09:06.383 "uuid": "7299d60a-be8c-41b4-a159-c612d04f2591", 00:09:06.383 "assigned_rate_limits": { 00:09:06.383 "rw_ios_per_sec": 0, 00:09:06.383 "rw_mbytes_per_sec": 0, 00:09:06.383 "r_mbytes_per_sec": 0, 00:09:06.383 "w_mbytes_per_sec": 0 00:09:06.383 }, 00:09:06.383 "claimed": false, 00:09:06.383 "zoned": false, 00:09:06.383 "supported_io_types": { 00:09:06.383 "read": true, 00:09:06.383 "write": true, 00:09:06.383 "unmap": true, 00:09:06.383 "flush": true, 00:09:06.383 "reset": true, 00:09:06.383 "nvme_admin": false, 00:09:06.383 "nvme_io": false, 00:09:06.383 "nvme_io_md": false, 00:09:06.383 "write_zeroes": true, 00:09:06.383 "zcopy": false, 00:09:06.383 "get_zone_info": false, 00:09:06.383 "zone_management": false, 00:09:06.383 "zone_append": false, 00:09:06.383 "compare": false, 00:09:06.383 "compare_and_write": false, 00:09:06.383 "abort": false, 00:09:06.383 "seek_hole": false, 00:09:06.383 "seek_data": false, 00:09:06.383 "copy": false, 00:09:06.383 "nvme_iov_md": false 00:09:06.383 }, 00:09:06.383 "memory_domains": [ 00:09:06.383 { 00:09:06.383 "dma_device_id": "system", 00:09:06.383 "dma_device_type": 1 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.383 "dma_device_type": 2 00:09:06.383 }, 00:09:06.383 
{ 00:09:06.383 "dma_device_id": "system", 00:09:06.383 "dma_device_type": 1 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.383 "dma_device_type": 2 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "dma_device_id": "system", 00:09:06.383 "dma_device_type": 1 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.383 "dma_device_type": 2 00:09:06.383 } 00:09:06.383 ], 00:09:06.383 "driver_specific": { 00:09:06.383 "raid": { 00:09:06.383 "uuid": "7299d60a-be8c-41b4-a159-c612d04f2591", 00:09:06.383 "strip_size_kb": 64, 00:09:06.383 "state": "online", 00:09:06.383 "raid_level": "raid0", 00:09:06.383 "superblock": false, 00:09:06.383 "num_base_bdevs": 3, 00:09:06.383 "num_base_bdevs_discovered": 3, 00:09:06.383 "num_base_bdevs_operational": 3, 00:09:06.383 "base_bdevs_list": [ 00:09:06.383 { 00:09:06.383 "name": "NewBaseBdev", 00:09:06.383 "uuid": "f7616e89-617b-4c3a-b20f-7be302bbedc2", 00:09:06.383 "is_configured": true, 00:09:06.383 "data_offset": 0, 00:09:06.383 "data_size": 65536 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "name": "BaseBdev2", 00:09:06.383 "uuid": "a6c86a40-564a-4769-8bbb-f0212ef68503", 00:09:06.383 "is_configured": true, 00:09:06.383 "data_offset": 0, 00:09:06.383 "data_size": 65536 00:09:06.383 }, 00:09:06.383 { 00:09:06.383 "name": "BaseBdev3", 00:09:06.383 "uuid": "128bc1ec-f9f9-4608-8a91-6721e7648043", 00:09:06.383 "is_configured": true, 00:09:06.383 "data_offset": 0, 00:09:06.383 "data_size": 65536 00:09:06.383 } 00:09:06.383 ] 00:09:06.383 } 00:09:06.383 } 00:09:06.383 }' 00:09:06.383 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.383 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:06.384 BaseBdev2 00:09:06.384 BaseBdev3' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.384 
07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.384 [2024-11-29 07:40:56.312417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.384 [2024-11-29 07:40:56.312447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.384 [2024-11-29 07:40:56.312519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.384 [2024-11-29 07:40:56.312572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.384 [2024-11-29 07:40:56.312585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63637 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63637 ']' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63637 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.384 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63637 00:09:06.643 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.643 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.643 killing process with pid 63637 00:09:06.643 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63637' 00:09:06.643 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63637 00:09:06.643 [2024-11-29 07:40:56.355776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.643 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63637 00:09:06.901 [2024-11-29 07:40:56.646784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.837 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.837 00:09:07.837 real 0m10.217s 00:09:07.837 user 0m16.324s 00:09:07.837 sys 0m1.710s 00:09:07.837 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.837 
07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.837 ************************************ 00:09:07.837 END TEST raid_state_function_test 00:09:07.837 ************************************ 00:09:08.096 07:40:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:08.096 07:40:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.096 07:40:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.096 07:40:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.096 ************************************ 00:09:08.096 START TEST raid_state_function_test_sb 00:09:08.096 ************************************ 00:09:08.096 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64258 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.097 Process raid pid: 64258 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64258' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64258 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64258 ']' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.097 07:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.097 [2024-11-29 07:40:57.901806] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:08.097 [2024-11-29 07:40:57.901926] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.355 [2024-11-29 07:40:58.075683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.355 [2024-11-29 07:40:58.184407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.613 [2024-11-29 07:40:58.383304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.613 [2024-11-29 07:40:58.383354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.871 [2024-11-29 07:40:58.755739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.871 [2024-11-29 07:40:58.755799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.871 [2024-11-29 07:40:58.755810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.871 [2024-11-29 07:40:58.755820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.871 [2024-11-29 07:40:58.755827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:08.871 [2024-11-29 07:40:58.755836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.871 "name": "Existed_Raid", 00:09:08.871 "uuid": "fb63010c-9b8e-4c51-a20b-d5ff5029e8e6", 00:09:08.871 "strip_size_kb": 64, 00:09:08.871 "state": "configuring", 00:09:08.871 "raid_level": "raid0", 00:09:08.871 "superblock": true, 00:09:08.871 "num_base_bdevs": 3, 00:09:08.871 "num_base_bdevs_discovered": 0, 00:09:08.871 "num_base_bdevs_operational": 3, 00:09:08.871 "base_bdevs_list": [ 00:09:08.871 { 00:09:08.871 "name": "BaseBdev1", 00:09:08.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.871 "is_configured": false, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 0 00:09:08.871 }, 00:09:08.871 { 00:09:08.871 "name": "BaseBdev2", 00:09:08.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.871 "is_configured": false, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 0 00:09:08.871 }, 00:09:08.871 { 00:09:08.871 "name": "BaseBdev3", 00:09:08.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.871 "is_configured": false, 00:09:08.871 "data_offset": 0, 00:09:08.871 "data_size": 0 00:09:08.871 } 00:09:08.871 ] 00:09:08.871 }' 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.871 07:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.436 [2024-11-29 07:40:59.170974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.436 [2024-11-29 07:40:59.171027] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.436 [2024-11-29 07:40:59.178946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.436 [2024-11-29 07:40:59.178989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.436 [2024-11-29 07:40:59.178998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.436 [2024-11-29 07:40:59.179007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.436 [2024-11-29 07:40:59.179013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.436 [2024-11-29 07:40:59.179021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.436 [2024-11-29 07:40:59.221399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.436 BaseBdev1 
00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.436 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.437 [ 00:09:09.437 { 00:09:09.437 "name": "BaseBdev1", 00:09:09.437 "aliases": [ 00:09:09.437 "ea46adf9-aa8b-40a3-9901-723b459ab6a5" 00:09:09.437 ], 00:09:09.437 "product_name": "Malloc disk", 00:09:09.437 "block_size": 512, 00:09:09.437 "num_blocks": 65536, 00:09:09.437 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:09.437 "assigned_rate_limits": { 00:09:09.437 
"rw_ios_per_sec": 0, 00:09:09.437 "rw_mbytes_per_sec": 0, 00:09:09.437 "r_mbytes_per_sec": 0, 00:09:09.437 "w_mbytes_per_sec": 0 00:09:09.437 }, 00:09:09.437 "claimed": true, 00:09:09.437 "claim_type": "exclusive_write", 00:09:09.437 "zoned": false, 00:09:09.437 "supported_io_types": { 00:09:09.437 "read": true, 00:09:09.437 "write": true, 00:09:09.437 "unmap": true, 00:09:09.437 "flush": true, 00:09:09.437 "reset": true, 00:09:09.437 "nvme_admin": false, 00:09:09.437 "nvme_io": false, 00:09:09.437 "nvme_io_md": false, 00:09:09.437 "write_zeroes": true, 00:09:09.437 "zcopy": true, 00:09:09.437 "get_zone_info": false, 00:09:09.437 "zone_management": false, 00:09:09.437 "zone_append": false, 00:09:09.437 "compare": false, 00:09:09.437 "compare_and_write": false, 00:09:09.437 "abort": true, 00:09:09.437 "seek_hole": false, 00:09:09.437 "seek_data": false, 00:09:09.437 "copy": true, 00:09:09.437 "nvme_iov_md": false 00:09:09.437 }, 00:09:09.437 "memory_domains": [ 00:09:09.437 { 00:09:09.437 "dma_device_id": "system", 00:09:09.437 "dma_device_type": 1 00:09:09.437 }, 00:09:09.437 { 00:09:09.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.437 "dma_device_type": 2 00:09:09.437 } 00:09:09.437 ], 00:09:09.437 "driver_specific": {} 00:09:09.437 } 00:09:09.437 ] 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.437 "name": "Existed_Raid", 00:09:09.437 "uuid": "a3f2eacc-69c4-4db4-bf31-c7157e2be923", 00:09:09.437 "strip_size_kb": 64, 00:09:09.437 "state": "configuring", 00:09:09.437 "raid_level": "raid0", 00:09:09.437 "superblock": true, 00:09:09.437 "num_base_bdevs": 3, 00:09:09.437 "num_base_bdevs_discovered": 1, 00:09:09.437 "num_base_bdevs_operational": 3, 00:09:09.437 "base_bdevs_list": [ 00:09:09.437 { 00:09:09.437 "name": "BaseBdev1", 00:09:09.437 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:09.437 "is_configured": true, 00:09:09.437 "data_offset": 2048, 00:09:09.437 "data_size": 63488 
00:09:09.437 }, 00:09:09.437 { 00:09:09.437 "name": "BaseBdev2", 00:09:09.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.437 "is_configured": false, 00:09:09.437 "data_offset": 0, 00:09:09.437 "data_size": 0 00:09:09.437 }, 00:09:09.437 { 00:09:09.437 "name": "BaseBdev3", 00:09:09.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.437 "is_configured": false, 00:09:09.437 "data_offset": 0, 00:09:09.437 "data_size": 0 00:09:09.437 } 00:09:09.437 ] 00:09:09.437 }' 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.437 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 [2024-11-29 07:40:59.680647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.001 [2024-11-29 07:40:59.680703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 [2024-11-29 07:40:59.688686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.001 [2024-11-29 
07:40:59.690541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.001 [2024-11-29 07:40:59.690583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.001 [2024-11-29 07:40:59.690592] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.001 [2024-11-29 07:40:59.690601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.001 "name": "Existed_Raid", 00:09:10.001 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:10.001 "strip_size_kb": 64, 00:09:10.001 "state": "configuring", 00:09:10.001 "raid_level": "raid0", 00:09:10.001 "superblock": true, 00:09:10.001 "num_base_bdevs": 3, 00:09:10.001 "num_base_bdevs_discovered": 1, 00:09:10.001 "num_base_bdevs_operational": 3, 00:09:10.001 "base_bdevs_list": [ 00:09:10.001 { 00:09:10.001 "name": "BaseBdev1", 00:09:10.001 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:10.001 "is_configured": true, 00:09:10.001 "data_offset": 2048, 00:09:10.001 "data_size": 63488 00:09:10.001 }, 00:09:10.001 { 00:09:10.001 "name": "BaseBdev2", 00:09:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.001 "is_configured": false, 00:09:10.001 "data_offset": 0, 00:09:10.001 "data_size": 0 00:09:10.001 }, 00:09:10.001 { 00:09:10.001 "name": "BaseBdev3", 00:09:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.001 "is_configured": false, 00:09:10.001 "data_offset": 0, 00:09:10.001 "data_size": 0 00:09:10.001 } 00:09:10.001 ] 00:09:10.001 }' 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.001 07:40:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.259 [2024-11-29 07:41:00.155245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.259 BaseBdev2 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.259 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.259 [ 00:09:10.259 { 00:09:10.259 "name": "BaseBdev2", 00:09:10.259 "aliases": [ 00:09:10.259 "5b3442ad-e5d8-438f-9dc6-532da0eac7a2" 00:09:10.259 ], 00:09:10.259 "product_name": "Malloc disk", 00:09:10.260 "block_size": 512, 00:09:10.260 "num_blocks": 65536, 00:09:10.260 "uuid": "5b3442ad-e5d8-438f-9dc6-532da0eac7a2", 00:09:10.260 "assigned_rate_limits": { 00:09:10.260 "rw_ios_per_sec": 0, 00:09:10.260 "rw_mbytes_per_sec": 0, 00:09:10.260 "r_mbytes_per_sec": 0, 00:09:10.260 "w_mbytes_per_sec": 0 00:09:10.260 }, 00:09:10.260 "claimed": true, 00:09:10.260 "claim_type": "exclusive_write", 00:09:10.260 "zoned": false, 00:09:10.260 "supported_io_types": { 00:09:10.260 "read": true, 00:09:10.260 "write": true, 00:09:10.260 "unmap": true, 00:09:10.260 "flush": true, 00:09:10.260 "reset": true, 00:09:10.260 "nvme_admin": false, 00:09:10.260 "nvme_io": false, 00:09:10.260 "nvme_io_md": false, 00:09:10.260 "write_zeroes": true, 00:09:10.260 "zcopy": true, 00:09:10.260 "get_zone_info": false, 00:09:10.260 "zone_management": false, 00:09:10.260 "zone_append": false, 00:09:10.260 "compare": false, 00:09:10.260 "compare_and_write": false, 00:09:10.260 "abort": true, 00:09:10.260 "seek_hole": false, 00:09:10.260 "seek_data": false, 00:09:10.260 "copy": true, 00:09:10.260 "nvme_iov_md": false 00:09:10.260 }, 00:09:10.260 "memory_domains": [ 00:09:10.260 { 00:09:10.260 "dma_device_id": "system", 00:09:10.260 "dma_device_type": 1 00:09:10.260 }, 00:09:10.260 { 00:09:10.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.260 "dma_device_type": 2 00:09:10.260 } 00:09:10.260 ], 00:09:10.260 "driver_specific": {} 00:09:10.260 } 00:09:10.260 ] 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.260 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.519 07:41:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.519 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.519 "name": "Existed_Raid", 00:09:10.519 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:10.519 "strip_size_kb": 64, 00:09:10.519 "state": "configuring", 00:09:10.519 "raid_level": "raid0", 00:09:10.519 "superblock": true, 00:09:10.519 "num_base_bdevs": 3, 00:09:10.519 "num_base_bdevs_discovered": 2, 00:09:10.519 "num_base_bdevs_operational": 3, 00:09:10.519 "base_bdevs_list": [ 00:09:10.519 { 00:09:10.519 "name": "BaseBdev1", 00:09:10.519 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:10.519 "is_configured": true, 00:09:10.519 "data_offset": 2048, 00:09:10.519 "data_size": 63488 00:09:10.519 }, 00:09:10.519 { 00:09:10.519 "name": "BaseBdev2", 00:09:10.519 "uuid": "5b3442ad-e5d8-438f-9dc6-532da0eac7a2", 00:09:10.519 "is_configured": true, 00:09:10.519 "data_offset": 2048, 00:09:10.519 "data_size": 63488 00:09:10.519 }, 00:09:10.519 { 00:09:10.519 "name": "BaseBdev3", 00:09:10.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.519 "is_configured": false, 00:09:10.519 "data_offset": 0, 00:09:10.519 "data_size": 0 00:09:10.519 } 00:09:10.519 ] 00:09:10.519 }' 00:09:10.519 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.519 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.777 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.777 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.777 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.777 [2024-11-29 07:41:00.652570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.777 [2024-11-29 07:41:00.652823] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.777 [2024-11-29 07:41:00.652843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.777 [2024-11-29 07:41:00.653127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.777 [2024-11-29 07:41:00.653294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.777 [2024-11-29 07:41:00.653305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.777 [2024-11-29 07:41:00.653448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.777 BaseBdev3 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.778 [ 00:09:10.778 { 00:09:10.778 "name": "BaseBdev3", 00:09:10.778 "aliases": [ 00:09:10.778 "7961e6ba-664b-4c8b-8c47-77658980b6c7" 00:09:10.778 ], 00:09:10.778 "product_name": "Malloc disk", 00:09:10.778 "block_size": 512, 00:09:10.778 "num_blocks": 65536, 00:09:10.778 "uuid": "7961e6ba-664b-4c8b-8c47-77658980b6c7", 00:09:10.778 "assigned_rate_limits": { 00:09:10.778 "rw_ios_per_sec": 0, 00:09:10.778 "rw_mbytes_per_sec": 0, 00:09:10.778 "r_mbytes_per_sec": 0, 00:09:10.778 "w_mbytes_per_sec": 0 00:09:10.778 }, 00:09:10.778 "claimed": true, 00:09:10.778 "claim_type": "exclusive_write", 00:09:10.778 "zoned": false, 00:09:10.778 "supported_io_types": { 00:09:10.778 "read": true, 00:09:10.778 "write": true, 00:09:10.778 "unmap": true, 00:09:10.778 "flush": true, 00:09:10.778 "reset": true, 00:09:10.778 "nvme_admin": false, 00:09:10.778 "nvme_io": false, 00:09:10.778 "nvme_io_md": false, 00:09:10.778 "write_zeroes": true, 00:09:10.778 "zcopy": true, 00:09:10.778 "get_zone_info": false, 00:09:10.778 "zone_management": false, 00:09:10.778 "zone_append": false, 00:09:10.778 "compare": false, 00:09:10.778 "compare_and_write": false, 00:09:10.778 "abort": true, 00:09:10.778 "seek_hole": false, 00:09:10.778 "seek_data": false, 00:09:10.778 "copy": true, 00:09:10.778 "nvme_iov_md": false 00:09:10.778 }, 00:09:10.778 "memory_domains": [ 00:09:10.778 { 00:09:10.778 "dma_device_id": "system", 00:09:10.778 "dma_device_type": 1 00:09:10.778 }, 00:09:10.778 { 00:09:10.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.778 "dma_device_type": 2 00:09:10.778 } 00:09:10.778 ], 00:09:10.778 "driver_specific": 
{} 00:09:10.778 } 00:09:10.778 ] 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.778 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.037 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.037 "name": "Existed_Raid", 00:09:11.037 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:11.037 "strip_size_kb": 64, 00:09:11.038 "state": "online", 00:09:11.038 "raid_level": "raid0", 00:09:11.038 "superblock": true, 00:09:11.038 "num_base_bdevs": 3, 00:09:11.038 "num_base_bdevs_discovered": 3, 00:09:11.038 "num_base_bdevs_operational": 3, 00:09:11.038 "base_bdevs_list": [ 00:09:11.038 { 00:09:11.038 "name": "BaseBdev1", 00:09:11.038 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:11.038 "is_configured": true, 00:09:11.038 "data_offset": 2048, 00:09:11.038 "data_size": 63488 00:09:11.038 }, 00:09:11.038 { 00:09:11.038 "name": "BaseBdev2", 00:09:11.038 "uuid": "5b3442ad-e5d8-438f-9dc6-532da0eac7a2", 00:09:11.038 "is_configured": true, 00:09:11.038 "data_offset": 2048, 00:09:11.038 "data_size": 63488 00:09:11.038 }, 00:09:11.038 { 00:09:11.038 "name": "BaseBdev3", 00:09:11.038 "uuid": "7961e6ba-664b-4c8b-8c47-77658980b6c7", 00:09:11.038 "is_configured": true, 00:09:11.038 "data_offset": 2048, 00:09:11.038 "data_size": 63488 00:09:11.038 } 00:09:11.038 ] 00:09:11.038 }' 00:09:11.038 07:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.038 07:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.297 [2024-11-29 07:41:01.136147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.297 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.297 "name": "Existed_Raid", 00:09:11.297 "aliases": [ 00:09:11.297 "33402edc-b04e-45e1-945b-962ca441fc05" 00:09:11.297 ], 00:09:11.297 "product_name": "Raid Volume", 00:09:11.297 "block_size": 512, 00:09:11.297 "num_blocks": 190464, 00:09:11.297 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:11.297 "assigned_rate_limits": { 00:09:11.297 "rw_ios_per_sec": 0, 00:09:11.297 "rw_mbytes_per_sec": 0, 00:09:11.297 "r_mbytes_per_sec": 0, 00:09:11.297 "w_mbytes_per_sec": 0 00:09:11.297 }, 00:09:11.297 "claimed": false, 00:09:11.297 "zoned": false, 00:09:11.297 "supported_io_types": { 00:09:11.297 "read": true, 00:09:11.297 "write": true, 00:09:11.297 "unmap": true, 00:09:11.297 "flush": true, 00:09:11.297 "reset": true, 00:09:11.297 "nvme_admin": false, 00:09:11.297 "nvme_io": false, 00:09:11.297 "nvme_io_md": false, 00:09:11.297 
"write_zeroes": true, 00:09:11.297 "zcopy": false, 00:09:11.297 "get_zone_info": false, 00:09:11.297 "zone_management": false, 00:09:11.297 "zone_append": false, 00:09:11.297 "compare": false, 00:09:11.297 "compare_and_write": false, 00:09:11.297 "abort": false, 00:09:11.297 "seek_hole": false, 00:09:11.297 "seek_data": false, 00:09:11.297 "copy": false, 00:09:11.297 "nvme_iov_md": false 00:09:11.297 }, 00:09:11.297 "memory_domains": [ 00:09:11.297 { 00:09:11.297 "dma_device_id": "system", 00:09:11.297 "dma_device_type": 1 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.297 "dma_device_type": 2 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "dma_device_id": "system", 00:09:11.297 "dma_device_type": 1 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.297 "dma_device_type": 2 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "dma_device_id": "system", 00:09:11.297 "dma_device_type": 1 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.297 "dma_device_type": 2 00:09:11.297 } 00:09:11.297 ], 00:09:11.297 "driver_specific": { 00:09:11.297 "raid": { 00:09:11.298 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:11.298 "strip_size_kb": 64, 00:09:11.298 "state": "online", 00:09:11.298 "raid_level": "raid0", 00:09:11.298 "superblock": true, 00:09:11.298 "num_base_bdevs": 3, 00:09:11.298 "num_base_bdevs_discovered": 3, 00:09:11.298 "num_base_bdevs_operational": 3, 00:09:11.298 "base_bdevs_list": [ 00:09:11.298 { 00:09:11.298 "name": "BaseBdev1", 00:09:11.298 "uuid": "ea46adf9-aa8b-40a3-9901-723b459ab6a5", 00:09:11.298 "is_configured": true, 00:09:11.298 "data_offset": 2048, 00:09:11.298 "data_size": 63488 00:09:11.298 }, 00:09:11.298 { 00:09:11.298 "name": "BaseBdev2", 00:09:11.298 "uuid": "5b3442ad-e5d8-438f-9dc6-532da0eac7a2", 00:09:11.298 "is_configured": true, 00:09:11.298 "data_offset": 2048, 00:09:11.298 "data_size": 63488 00:09:11.298 }, 
00:09:11.298 { 00:09:11.298 "name": "BaseBdev3", 00:09:11.298 "uuid": "7961e6ba-664b-4c8b-8c47-77658980b6c7", 00:09:11.298 "is_configured": true, 00:09:11.298 "data_offset": 2048, 00:09:11.298 "data_size": 63488 00:09:11.298 } 00:09:11.298 ] 00:09:11.298 } 00:09:11.298 } 00:09:11.298 }' 00:09:11.298 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.298 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.298 BaseBdev2 00:09:11.298 BaseBdev3' 00:09:11.298 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.557 
07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.557 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.557 [2024-11-29 07:41:01.391441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.558 [2024-11-29 07:41:01.391486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.558 [2024-11-29 07:41:01.391541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.558 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.817 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.817 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.817 "name": "Existed_Raid", 00:09:11.817 "uuid": "33402edc-b04e-45e1-945b-962ca441fc05", 00:09:11.817 "strip_size_kb": 64, 00:09:11.817 "state": "offline", 00:09:11.817 "raid_level": "raid0", 00:09:11.817 "superblock": true, 00:09:11.817 "num_base_bdevs": 3, 00:09:11.817 "num_base_bdevs_discovered": 2, 00:09:11.817 "num_base_bdevs_operational": 2, 00:09:11.817 "base_bdevs_list": [ 00:09:11.817 { 00:09:11.817 "name": null, 00:09:11.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.817 "is_configured": false, 00:09:11.817 "data_offset": 0, 00:09:11.817 "data_size": 63488 00:09:11.817 }, 00:09:11.817 { 00:09:11.817 "name": "BaseBdev2", 00:09:11.817 "uuid": "5b3442ad-e5d8-438f-9dc6-532da0eac7a2", 00:09:11.817 "is_configured": true, 00:09:11.817 "data_offset": 2048, 00:09:11.817 "data_size": 63488 00:09:11.817 }, 00:09:11.817 { 00:09:11.817 "name": "BaseBdev3", 00:09:11.817 "uuid": "7961e6ba-664b-4c8b-8c47-77658980b6c7", 
00:09:11.817 "is_configured": true, 00:09:11.817 "data_offset": 2048, 00:09:11.817 "data_size": 63488 00:09:11.817 } 00:09:11.817 ] 00:09:11.817 }' 00:09:11.817 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.817 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.076 07:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.076 [2024-11-29 07:41:02.002411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.335 [2024-11-29 07:41:02.153477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.335 [2024-11-29 07:41:02.153529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.335 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.594 BaseBdev2 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.594 07:41:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.594 [ 00:09:12.594 { 00:09:12.594 "name": "BaseBdev2", 00:09:12.594 "aliases": [ 00:09:12.594 "e3af91ff-b60d-456b-a70b-b6226341d176" 00:09:12.594 ], 00:09:12.594 "product_name": "Malloc disk", 00:09:12.594 "block_size": 512, 00:09:12.594 "num_blocks": 65536, 00:09:12.594 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:12.594 "assigned_rate_limits": { 00:09:12.594 "rw_ios_per_sec": 0, 00:09:12.594 "rw_mbytes_per_sec": 0, 00:09:12.594 "r_mbytes_per_sec": 0, 00:09:12.594 "w_mbytes_per_sec": 0 00:09:12.594 }, 00:09:12.594 "claimed": false, 00:09:12.594 "zoned": false, 00:09:12.594 "supported_io_types": { 00:09:12.594 "read": true, 00:09:12.594 "write": true, 00:09:12.594 "unmap": true, 00:09:12.594 "flush": true, 00:09:12.594 "reset": true, 00:09:12.594 "nvme_admin": false, 00:09:12.594 "nvme_io": false, 00:09:12.594 "nvme_io_md": false, 00:09:12.594 "write_zeroes": true, 00:09:12.594 "zcopy": true, 00:09:12.594 "get_zone_info": false, 00:09:12.594 
"zone_management": false, 00:09:12.594 "zone_append": false, 00:09:12.594 "compare": false, 00:09:12.594 "compare_and_write": false, 00:09:12.594 "abort": true, 00:09:12.594 "seek_hole": false, 00:09:12.594 "seek_data": false, 00:09:12.594 "copy": true, 00:09:12.594 "nvme_iov_md": false 00:09:12.594 }, 00:09:12.594 "memory_domains": [ 00:09:12.594 { 00:09:12.594 "dma_device_id": "system", 00:09:12.594 "dma_device_type": 1 00:09:12.594 }, 00:09:12.594 { 00:09:12.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.594 "dma_device_type": 2 00:09:12.594 } 00:09:12.594 ], 00:09:12.594 "driver_specific": {} 00:09:12.594 } 00:09:12.594 ] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.594 BaseBdev3 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.594 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.595 [ 00:09:12.595 { 00:09:12.595 "name": "BaseBdev3", 00:09:12.595 "aliases": [ 00:09:12.595 "98d35dad-c3f8-45f6-be74-35e5146598e8" 00:09:12.595 ], 00:09:12.595 "product_name": "Malloc disk", 00:09:12.595 "block_size": 512, 00:09:12.595 "num_blocks": 65536, 00:09:12.595 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:12.595 "assigned_rate_limits": { 00:09:12.595 "rw_ios_per_sec": 0, 00:09:12.595 "rw_mbytes_per_sec": 0, 00:09:12.595 "r_mbytes_per_sec": 0, 00:09:12.595 "w_mbytes_per_sec": 0 00:09:12.595 }, 00:09:12.595 "claimed": false, 00:09:12.595 "zoned": false, 00:09:12.595 "supported_io_types": { 00:09:12.595 "read": true, 00:09:12.595 "write": true, 00:09:12.595 "unmap": true, 00:09:12.595 "flush": true, 00:09:12.595 "reset": true, 00:09:12.595 "nvme_admin": false, 00:09:12.595 "nvme_io": false, 00:09:12.595 "nvme_io_md": false, 00:09:12.595 "write_zeroes": true, 00:09:12.595 
"zcopy": true, 00:09:12.595 "get_zone_info": false, 00:09:12.595 "zone_management": false, 00:09:12.595 "zone_append": false, 00:09:12.595 "compare": false, 00:09:12.595 "compare_and_write": false, 00:09:12.595 "abort": true, 00:09:12.595 "seek_hole": false, 00:09:12.595 "seek_data": false, 00:09:12.595 "copy": true, 00:09:12.595 "nvme_iov_md": false 00:09:12.595 }, 00:09:12.595 "memory_domains": [ 00:09:12.595 { 00:09:12.595 "dma_device_id": "system", 00:09:12.595 "dma_device_type": 1 00:09:12.595 }, 00:09:12.595 { 00:09:12.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.595 "dma_device_type": 2 00:09:12.595 } 00:09:12.595 ], 00:09:12.595 "driver_specific": {} 00:09:12.595 } 00:09:12.595 ] 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.595 [2024-11-29 07:41:02.466891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.595 [2024-11-29 07:41:02.466979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.595 [2024-11-29 07:41:02.467024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.595 [2024-11-29 07:41:02.468830] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.595 07:41:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.595 "name": "Existed_Raid", 00:09:12.595 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:12.595 "strip_size_kb": 64, 00:09:12.595 "state": "configuring", 00:09:12.595 "raid_level": "raid0", 00:09:12.595 "superblock": true, 00:09:12.595 "num_base_bdevs": 3, 00:09:12.595 "num_base_bdevs_discovered": 2, 00:09:12.595 "num_base_bdevs_operational": 3, 00:09:12.595 "base_bdevs_list": [ 00:09:12.595 { 00:09:12.595 "name": "BaseBdev1", 00:09:12.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.595 "is_configured": false, 00:09:12.595 "data_offset": 0, 00:09:12.595 "data_size": 0 00:09:12.595 }, 00:09:12.595 { 00:09:12.595 "name": "BaseBdev2", 00:09:12.595 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:12.595 "is_configured": true, 00:09:12.595 "data_offset": 2048, 00:09:12.595 "data_size": 63488 00:09:12.595 }, 00:09:12.595 { 00:09:12.595 "name": "BaseBdev3", 00:09:12.595 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:12.595 "is_configured": true, 00:09:12.595 "data_offset": 2048, 00:09:12.595 "data_size": 63488 00:09:12.595 } 00:09:12.595 ] 00:09:12.595 }' 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.595 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 [2024-11-29 07:41:02.938158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.161 07:41:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.161 "name": "Existed_Raid", 00:09:13.161 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:13.161 "strip_size_kb": 64, 
00:09:13.161 "state": "configuring", 00:09:13.161 "raid_level": "raid0", 00:09:13.161 "superblock": true, 00:09:13.161 "num_base_bdevs": 3, 00:09:13.161 "num_base_bdevs_discovered": 1, 00:09:13.161 "num_base_bdevs_operational": 3, 00:09:13.161 "base_bdevs_list": [ 00:09:13.161 { 00:09:13.161 "name": "BaseBdev1", 00:09:13.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.161 "is_configured": false, 00:09:13.161 "data_offset": 0, 00:09:13.161 "data_size": 0 00:09:13.161 }, 00:09:13.161 { 00:09:13.161 "name": null, 00:09:13.161 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:13.161 "is_configured": false, 00:09:13.161 "data_offset": 0, 00:09:13.161 "data_size": 63488 00:09:13.161 }, 00:09:13.161 { 00:09:13.161 "name": "BaseBdev3", 00:09:13.161 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:13.161 "is_configured": true, 00:09:13.161 "data_offset": 2048, 00:09:13.161 "data_size": 63488 00:09:13.161 } 00:09:13.161 ] 00:09:13.161 }' 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.161 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 [2024-11-29 07:41:03.496792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.729 BaseBdev1 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 
[ 00:09:13.729 { 00:09:13.729 "name": "BaseBdev1", 00:09:13.729 "aliases": [ 00:09:13.729 "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f" 00:09:13.729 ], 00:09:13.729 "product_name": "Malloc disk", 00:09:13.729 "block_size": 512, 00:09:13.729 "num_blocks": 65536, 00:09:13.729 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:13.729 "assigned_rate_limits": { 00:09:13.729 "rw_ios_per_sec": 0, 00:09:13.729 "rw_mbytes_per_sec": 0, 00:09:13.729 "r_mbytes_per_sec": 0, 00:09:13.729 "w_mbytes_per_sec": 0 00:09:13.729 }, 00:09:13.729 "claimed": true, 00:09:13.729 "claim_type": "exclusive_write", 00:09:13.729 "zoned": false, 00:09:13.729 "supported_io_types": { 00:09:13.729 "read": true, 00:09:13.729 "write": true, 00:09:13.729 "unmap": true, 00:09:13.729 "flush": true, 00:09:13.729 "reset": true, 00:09:13.729 "nvme_admin": false, 00:09:13.729 "nvme_io": false, 00:09:13.729 "nvme_io_md": false, 00:09:13.729 "write_zeroes": true, 00:09:13.729 "zcopy": true, 00:09:13.729 "get_zone_info": false, 00:09:13.729 "zone_management": false, 00:09:13.729 "zone_append": false, 00:09:13.729 "compare": false, 00:09:13.729 "compare_and_write": false, 00:09:13.729 "abort": true, 00:09:13.729 "seek_hole": false, 00:09:13.729 "seek_data": false, 00:09:13.729 "copy": true, 00:09:13.729 "nvme_iov_md": false 00:09:13.729 }, 00:09:13.729 "memory_domains": [ 00:09:13.729 { 00:09:13.729 "dma_device_id": "system", 00:09:13.729 "dma_device_type": 1 00:09:13.729 }, 00:09:13.729 { 00:09:13.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.729 "dma_device_type": 2 00:09:13.729 } 00:09:13.729 ], 00:09:13.729 "driver_specific": {} 00:09:13.729 } 00:09:13.729 ] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.729 "name": "Existed_Raid", 00:09:13.729 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:13.729 "strip_size_kb": 64, 00:09:13.729 "state": "configuring", 00:09:13.729 "raid_level": "raid0", 00:09:13.729 "superblock": true, 
00:09:13.729 "num_base_bdevs": 3, 00:09:13.729 "num_base_bdevs_discovered": 2, 00:09:13.729 "num_base_bdevs_operational": 3, 00:09:13.729 "base_bdevs_list": [ 00:09:13.729 { 00:09:13.729 "name": "BaseBdev1", 00:09:13.729 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:13.729 "is_configured": true, 00:09:13.729 "data_offset": 2048, 00:09:13.729 "data_size": 63488 00:09:13.729 }, 00:09:13.729 { 00:09:13.730 "name": null, 00:09:13.730 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:13.730 "is_configured": false, 00:09:13.730 "data_offset": 0, 00:09:13.730 "data_size": 63488 00:09:13.730 }, 00:09:13.730 { 00:09:13.730 "name": "BaseBdev3", 00:09:13.730 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:13.730 "is_configured": true, 00:09:13.730 "data_offset": 2048, 00:09:13.730 "data_size": 63488 00:09:13.730 } 00:09:13.730 ] 00:09:13.730 }' 00:09:13.730 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.730 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.297 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.297 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.297 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 [2024-11-29 07:41:04.007978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.297 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.297 "name": "Existed_Raid", 00:09:14.297 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:14.297 "strip_size_kb": 64, 00:09:14.297 "state": "configuring", 00:09:14.297 "raid_level": "raid0", 00:09:14.297 "superblock": true, 00:09:14.297 "num_base_bdevs": 3, 00:09:14.297 "num_base_bdevs_discovered": 1, 00:09:14.297 "num_base_bdevs_operational": 3, 00:09:14.297 "base_bdevs_list": [ 00:09:14.297 { 00:09:14.297 "name": "BaseBdev1", 00:09:14.297 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:14.297 "is_configured": true, 00:09:14.297 "data_offset": 2048, 00:09:14.297 "data_size": 63488 00:09:14.298 }, 00:09:14.298 { 00:09:14.298 "name": null, 00:09:14.298 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:14.298 "is_configured": false, 00:09:14.298 "data_offset": 0, 00:09:14.298 "data_size": 63488 00:09:14.298 }, 00:09:14.298 { 00:09:14.298 "name": null, 00:09:14.298 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:14.298 "is_configured": false, 00:09:14.298 "data_offset": 0, 00:09:14.298 "data_size": 63488 00:09:14.298 } 00:09:14.298 ] 00:09:14.298 }' 00:09:14.298 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.298 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.556 [2024-11-29 07:41:04.467300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.556 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.814 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.814 "name": "Existed_Raid", 00:09:14.814 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:14.814 "strip_size_kb": 64, 00:09:14.814 "state": "configuring", 00:09:14.814 "raid_level": "raid0", 00:09:14.814 "superblock": true, 00:09:14.814 "num_base_bdevs": 3, 00:09:14.814 "num_base_bdevs_discovered": 2, 00:09:14.814 "num_base_bdevs_operational": 3, 00:09:14.814 "base_bdevs_list": [ 00:09:14.814 { 00:09:14.814 "name": "BaseBdev1", 00:09:14.814 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:14.814 "is_configured": true, 00:09:14.814 "data_offset": 2048, 00:09:14.814 "data_size": 63488 00:09:14.814 }, 00:09:14.814 { 00:09:14.814 "name": null, 00:09:14.814 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:14.814 "is_configured": false, 00:09:14.814 "data_offset": 0, 00:09:14.814 "data_size": 63488 00:09:14.814 }, 00:09:14.814 { 00:09:14.814 "name": "BaseBdev3", 00:09:14.814 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:14.814 "is_configured": true, 00:09:14.814 "data_offset": 2048, 00:09:14.814 "data_size": 63488 00:09:14.814 } 00:09:14.814 ] 00:09:14.814 }' 00:09:14.814 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.814 07:41:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.074 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.074 [2024-11-29 07:41:04.922507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.074 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.074 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.074 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.333 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.333 "name": "Existed_Raid", 00:09:15.333 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:15.333 "strip_size_kb": 64, 00:09:15.333 "state": "configuring", 00:09:15.333 "raid_level": "raid0", 00:09:15.333 "superblock": true, 00:09:15.333 "num_base_bdevs": 3, 00:09:15.333 "num_base_bdevs_discovered": 1, 00:09:15.333 "num_base_bdevs_operational": 3, 00:09:15.333 "base_bdevs_list": [ 00:09:15.333 { 00:09:15.333 "name": null, 00:09:15.333 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:15.333 "is_configured": false, 00:09:15.333 "data_offset": 0, 00:09:15.333 "data_size": 63488 00:09:15.333 }, 00:09:15.333 { 00:09:15.333 "name": null, 00:09:15.333 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:15.333 "is_configured": false, 00:09:15.333 "data_offset": 0, 00:09:15.333 
"data_size": 63488 00:09:15.333 }, 00:09:15.333 { 00:09:15.333 "name": "BaseBdev3", 00:09:15.333 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:15.333 "is_configured": true, 00:09:15.333 "data_offset": 2048, 00:09:15.333 "data_size": 63488 00:09:15.333 } 00:09:15.333 ] 00:09:15.333 }' 00:09:15.334 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.334 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.591 [2024-11-29 07:41:05.487223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.591 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.592 07:41:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.592 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.850 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.850 "name": "Existed_Raid", 00:09:15.850 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:15.851 "strip_size_kb": 64, 00:09:15.851 "state": "configuring", 00:09:15.851 "raid_level": "raid0", 00:09:15.851 "superblock": true, 00:09:15.851 "num_base_bdevs": 3, 00:09:15.851 
"num_base_bdevs_discovered": 2, 00:09:15.851 "num_base_bdevs_operational": 3, 00:09:15.851 "base_bdevs_list": [ 00:09:15.851 { 00:09:15.851 "name": null, 00:09:15.851 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:15.851 "is_configured": false, 00:09:15.851 "data_offset": 0, 00:09:15.851 "data_size": 63488 00:09:15.851 }, 00:09:15.851 { 00:09:15.851 "name": "BaseBdev2", 00:09:15.851 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:15.851 "is_configured": true, 00:09:15.851 "data_offset": 2048, 00:09:15.851 "data_size": 63488 00:09:15.851 }, 00:09:15.851 { 00:09:15.851 "name": "BaseBdev3", 00:09:15.851 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:15.851 "is_configured": true, 00:09:15.851 "data_offset": 2048, 00:09:15.851 "data_size": 63488 00:09:15.851 } 00:09:15.851 ] 00:09:15.851 }' 00:09:15.851 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.851 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.109 07:41:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ccb6ab4-7824-4b5b-a78d-d842c5535d6f 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 [2024-11-29 07:41:06.041395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.109 [2024-11-29 07:41:06.041687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.109 [2024-11-29 07:41:06.041746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:16.109 [2024-11-29 07:41:06.042017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:16.109 [2024-11-29 07:41:06.042222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.109 [2024-11-29 07:41:06.042266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:16.109 NewBaseBdev 00:09:16.109 [2024-11-29 07:41:06.042445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.109 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.109 
07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.110 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.368 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.368 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.368 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.368 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.368 [ 00:09:16.368 { 00:09:16.368 "name": "NewBaseBdev", 00:09:16.368 "aliases": [ 00:09:16.368 "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f" 00:09:16.368 ], 00:09:16.368 "product_name": "Malloc disk", 00:09:16.368 "block_size": 512, 00:09:16.368 "num_blocks": 65536, 00:09:16.368 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:16.368 "assigned_rate_limits": { 00:09:16.369 "rw_ios_per_sec": 0, 00:09:16.369 "rw_mbytes_per_sec": 0, 00:09:16.369 "r_mbytes_per_sec": 0, 00:09:16.369 "w_mbytes_per_sec": 0 00:09:16.369 }, 00:09:16.369 "claimed": true, 00:09:16.369 "claim_type": "exclusive_write", 00:09:16.369 "zoned": false, 00:09:16.369 "supported_io_types": { 00:09:16.369 "read": true, 00:09:16.369 "write": true, 00:09:16.369 
"unmap": true, 00:09:16.369 "flush": true, 00:09:16.369 "reset": true, 00:09:16.369 "nvme_admin": false, 00:09:16.369 "nvme_io": false, 00:09:16.369 "nvme_io_md": false, 00:09:16.369 "write_zeroes": true, 00:09:16.369 "zcopy": true, 00:09:16.369 "get_zone_info": false, 00:09:16.369 "zone_management": false, 00:09:16.369 "zone_append": false, 00:09:16.369 "compare": false, 00:09:16.369 "compare_and_write": false, 00:09:16.369 "abort": true, 00:09:16.369 "seek_hole": false, 00:09:16.369 "seek_data": false, 00:09:16.369 "copy": true, 00:09:16.369 "nvme_iov_md": false 00:09:16.369 }, 00:09:16.369 "memory_domains": [ 00:09:16.369 { 00:09:16.369 "dma_device_id": "system", 00:09:16.369 "dma_device_type": 1 00:09:16.369 }, 00:09:16.369 { 00:09:16.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.369 "dma_device_type": 2 00:09:16.369 } 00:09:16.369 ], 00:09:16.369 "driver_specific": {} 00:09:16.369 } 00:09:16.369 ] 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.369 "name": "Existed_Raid", 00:09:16.369 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:16.369 "strip_size_kb": 64, 00:09:16.369 "state": "online", 00:09:16.369 "raid_level": "raid0", 00:09:16.369 "superblock": true, 00:09:16.369 "num_base_bdevs": 3, 00:09:16.369 "num_base_bdevs_discovered": 3, 00:09:16.369 "num_base_bdevs_operational": 3, 00:09:16.369 "base_bdevs_list": [ 00:09:16.369 { 00:09:16.369 "name": "NewBaseBdev", 00:09:16.369 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:16.369 "is_configured": true, 00:09:16.369 "data_offset": 2048, 00:09:16.369 "data_size": 63488 00:09:16.369 }, 00:09:16.369 { 00:09:16.369 "name": "BaseBdev2", 00:09:16.369 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:16.369 "is_configured": true, 00:09:16.369 "data_offset": 2048, 00:09:16.369 "data_size": 63488 00:09:16.369 }, 00:09:16.369 { 00:09:16.369 "name": "BaseBdev3", 00:09:16.369 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:16.369 
"is_configured": true, 00:09:16.369 "data_offset": 2048, 00:09:16.369 "data_size": 63488 00:09:16.369 } 00:09:16.369 ] 00:09:16.369 }' 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.369 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.629 [2024-11-29 07:41:06.516932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.629 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.630 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.630 "name": "Existed_Raid", 00:09:16.630 "aliases": [ 00:09:16.630 "55b51ae8-9de7-4000-a6e5-eea9f7731243" 00:09:16.630 ], 00:09:16.630 "product_name": "Raid 
Volume", 00:09:16.630 "block_size": 512, 00:09:16.630 "num_blocks": 190464, 00:09:16.630 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:16.630 "assigned_rate_limits": { 00:09:16.630 "rw_ios_per_sec": 0, 00:09:16.630 "rw_mbytes_per_sec": 0, 00:09:16.630 "r_mbytes_per_sec": 0, 00:09:16.630 "w_mbytes_per_sec": 0 00:09:16.630 }, 00:09:16.630 "claimed": false, 00:09:16.630 "zoned": false, 00:09:16.630 "supported_io_types": { 00:09:16.630 "read": true, 00:09:16.630 "write": true, 00:09:16.630 "unmap": true, 00:09:16.630 "flush": true, 00:09:16.630 "reset": true, 00:09:16.630 "nvme_admin": false, 00:09:16.630 "nvme_io": false, 00:09:16.630 "nvme_io_md": false, 00:09:16.630 "write_zeroes": true, 00:09:16.630 "zcopy": false, 00:09:16.630 "get_zone_info": false, 00:09:16.630 "zone_management": false, 00:09:16.630 "zone_append": false, 00:09:16.630 "compare": false, 00:09:16.630 "compare_and_write": false, 00:09:16.630 "abort": false, 00:09:16.630 "seek_hole": false, 00:09:16.630 "seek_data": false, 00:09:16.630 "copy": false, 00:09:16.630 "nvme_iov_md": false 00:09:16.630 }, 00:09:16.630 "memory_domains": [ 00:09:16.630 { 00:09:16.630 "dma_device_id": "system", 00:09:16.630 "dma_device_type": 1 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.630 "dma_device_type": 2 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "dma_device_id": "system", 00:09:16.630 "dma_device_type": 1 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.630 "dma_device_type": 2 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "dma_device_id": "system", 00:09:16.630 "dma_device_type": 1 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.630 "dma_device_type": 2 00:09:16.630 } 00:09:16.630 ], 00:09:16.630 "driver_specific": { 00:09:16.630 "raid": { 00:09:16.630 "uuid": "55b51ae8-9de7-4000-a6e5-eea9f7731243", 00:09:16.630 "strip_size_kb": 64, 00:09:16.630 "state": "online", 
00:09:16.630 "raid_level": "raid0", 00:09:16.630 "superblock": true, 00:09:16.630 "num_base_bdevs": 3, 00:09:16.630 "num_base_bdevs_discovered": 3, 00:09:16.630 "num_base_bdevs_operational": 3, 00:09:16.630 "base_bdevs_list": [ 00:09:16.630 { 00:09:16.630 "name": "NewBaseBdev", 00:09:16.630 "uuid": "0ccb6ab4-7824-4b5b-a78d-d842c5535d6f", 00:09:16.630 "is_configured": true, 00:09:16.630 "data_offset": 2048, 00:09:16.630 "data_size": 63488 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "name": "BaseBdev2", 00:09:16.630 "uuid": "e3af91ff-b60d-456b-a70b-b6226341d176", 00:09:16.630 "is_configured": true, 00:09:16.630 "data_offset": 2048, 00:09:16.630 "data_size": 63488 00:09:16.630 }, 00:09:16.630 { 00:09:16.630 "name": "BaseBdev3", 00:09:16.630 "uuid": "98d35dad-c3f8-45f6-be74-35e5146598e8", 00:09:16.630 "is_configured": true, 00:09:16.630 "data_offset": 2048, 00:09:16.630 "data_size": 63488 00:09:16.630 } 00:09:16.630 ] 00:09:16.630 } 00:09:16.630 } 00:09:16.630 }' 00:09:16.630 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.930 BaseBdev2 00:09:16.930 BaseBdev3' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.930 [2024-11-29 07:41:06.768179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.930 [2024-11-29 07:41:06.768206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.930 [2024-11-29 07:41:06.768281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.930 [2024-11-29 07:41:06.768333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.930 [2024-11-29 07:41:06.768344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64258 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64258 ']' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64258 00:09:16.930 07:41:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64258 00:09:16.930 killing process with pid 64258 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64258' 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64258 00:09:16.930 [2024-11-29 07:41:06.813686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.930 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64258 00:09:17.190 [2024-11-29 07:41:07.111089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.567 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.567 00:09:18.567 real 0m10.401s 00:09:18.567 user 0m16.601s 00:09:18.567 sys 0m1.730s 00:09:18.567 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.567 ************************************ 00:09:18.567 END TEST raid_state_function_test_sb 00:09:18.567 ************************************ 00:09:18.567 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.567 07:41:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:18.567 07:41:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.567 07:41:08 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.567 07:41:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.567 ************************************ 00:09:18.567 START TEST raid_superblock_test 00:09:18.567 ************************************ 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:18.567 07:41:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64878 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64878 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64878 ']' 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.567 07:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.567 [2024-11-29 07:41:08.355496] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:18.567 [2024-11-29 07:41:08.355703] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64878 ] 00:09:18.827 [2024-11-29 07:41:08.527831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.827 [2024-11-29 07:41:08.639099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.086 [2024-11-29 07:41:08.836792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.086 [2024-11-29 07:41:08.836967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.344 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:19.345 
07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.345 malloc1 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.345 [2024-11-29 07:41:09.235144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.345 [2024-11-29 07:41:09.235239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.345 [2024-11-29 07:41:09.235295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.345 [2024-11-29 07:41:09.235323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.345 [2024-11-29 07:41:09.237473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.345 [2024-11-29 07:41:09.237543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.345 pt1 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.345 malloc2 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.605 [2024-11-29 07:41:09.293774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.605 [2024-11-29 07:41:09.293826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.605 [2024-11-29 07:41:09.293868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:19.605 [2024-11-29 07:41:09.293877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.605 [2024-11-29 07:41:09.295935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.605 [2024-11-29 07:41:09.295972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.605 
pt2 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.605 malloc3 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.605 [2024-11-29 07:41:09.361149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.605 [2024-11-29 07:41:09.361234] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.605 [2024-11-29 07:41:09.361287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.605 [2024-11-29 07:41:09.361315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.605 [2024-11-29 07:41:09.363348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.605 [2024-11-29 07:41:09.363422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.605 pt3 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.605 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.605 [2024-11-29 07:41:09.373173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.605 [2024-11-29 07:41:09.374917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.605 [2024-11-29 07:41:09.375019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.605 [2024-11-29 07:41:09.375220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:19.606 [2024-11-29 07:41:09.375265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.606 [2024-11-29 07:41:09.375539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:19.606 [2024-11-29 07:41:09.375736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:19.606 [2024-11-29 07:41:09.375776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:19.606 [2024-11-29 07:41:09.375969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.606 07:41:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.606 "name": "raid_bdev1", 00:09:19.606 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:19.606 "strip_size_kb": 64, 00:09:19.606 "state": "online", 00:09:19.606 "raid_level": "raid0", 00:09:19.606 "superblock": true, 00:09:19.606 "num_base_bdevs": 3, 00:09:19.606 "num_base_bdevs_discovered": 3, 00:09:19.606 "num_base_bdevs_operational": 3, 00:09:19.606 "base_bdevs_list": [ 00:09:19.606 { 00:09:19.606 "name": "pt1", 00:09:19.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.606 "is_configured": true, 00:09:19.606 "data_offset": 2048, 00:09:19.606 "data_size": 63488 00:09:19.606 }, 00:09:19.606 { 00:09:19.606 "name": "pt2", 00:09:19.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.606 "is_configured": true, 00:09:19.606 "data_offset": 2048, 00:09:19.606 "data_size": 63488 00:09:19.606 }, 00:09:19.606 { 00:09:19.606 "name": "pt3", 00:09:19.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.606 "is_configured": true, 00:09:19.606 "data_offset": 2048, 00:09:19.606 "data_size": 63488 00:09:19.606 } 00:09:19.606 ] 00:09:19.606 }' 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.606 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.866 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.866 [2024-11-29 07:41:09.808807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.125 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.125 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.125 "name": "raid_bdev1", 00:09:20.125 "aliases": [ 00:09:20.125 "246c4f83-05e5-40d1-8797-271c4c1feca3" 00:09:20.125 ], 00:09:20.125 "product_name": "Raid Volume", 00:09:20.125 "block_size": 512, 00:09:20.125 "num_blocks": 190464, 00:09:20.125 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:20.125 "assigned_rate_limits": { 00:09:20.125 "rw_ios_per_sec": 0, 00:09:20.125 "rw_mbytes_per_sec": 0, 00:09:20.125 "r_mbytes_per_sec": 0, 00:09:20.125 "w_mbytes_per_sec": 0 00:09:20.125 }, 00:09:20.125 "claimed": false, 00:09:20.125 "zoned": false, 00:09:20.125 "supported_io_types": { 00:09:20.125 "read": true, 00:09:20.125 "write": true, 00:09:20.125 "unmap": true, 00:09:20.125 "flush": true, 00:09:20.125 "reset": true, 00:09:20.125 "nvme_admin": false, 00:09:20.125 "nvme_io": false, 00:09:20.125 "nvme_io_md": false, 00:09:20.125 "write_zeroes": true, 00:09:20.125 "zcopy": false, 00:09:20.125 "get_zone_info": false, 00:09:20.125 "zone_management": false, 00:09:20.125 "zone_append": false, 00:09:20.125 "compare": 
false, 00:09:20.125 "compare_and_write": false, 00:09:20.125 "abort": false, 00:09:20.125 "seek_hole": false, 00:09:20.125 "seek_data": false, 00:09:20.125 "copy": false, 00:09:20.125 "nvme_iov_md": false 00:09:20.125 }, 00:09:20.125 "memory_domains": [ 00:09:20.125 { 00:09:20.125 "dma_device_id": "system", 00:09:20.125 "dma_device_type": 1 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.125 "dma_device_type": 2 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "dma_device_id": "system", 00:09:20.125 "dma_device_type": 1 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.125 "dma_device_type": 2 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "dma_device_id": "system", 00:09:20.125 "dma_device_type": 1 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.125 "dma_device_type": 2 00:09:20.125 } 00:09:20.125 ], 00:09:20.125 "driver_specific": { 00:09:20.125 "raid": { 00:09:20.125 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:20.125 "strip_size_kb": 64, 00:09:20.125 "state": "online", 00:09:20.125 "raid_level": "raid0", 00:09:20.125 "superblock": true, 00:09:20.125 "num_base_bdevs": 3, 00:09:20.125 "num_base_bdevs_discovered": 3, 00:09:20.125 "num_base_bdevs_operational": 3, 00:09:20.125 "base_bdevs_list": [ 00:09:20.125 { 00:09:20.125 "name": "pt1", 00:09:20.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.125 "is_configured": true, 00:09:20.125 "data_offset": 2048, 00:09:20.125 "data_size": 63488 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "name": "pt2", 00:09:20.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.125 "is_configured": true, 00:09:20.125 "data_offset": 2048, 00:09:20.125 "data_size": 63488 00:09:20.125 }, 00:09:20.125 { 00:09:20.125 "name": "pt3", 00:09:20.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.125 "is_configured": true, 00:09:20.125 "data_offset": 2048, 00:09:20.125 "data_size": 
63488 00:09:20.125 } 00:09:20.125 ] 00:09:20.125 } 00:09:20.125 } 00:09:20.125 }' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.126 pt2 00:09:20.126 pt3' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.126 07:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.126 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.386 [2024-11-29 07:41:10.096241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=246c4f83-05e5-40d1-8797-271c4c1feca3 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 246c4f83-05e5-40d1-8797-271c4c1feca3 ']' 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.386 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.386 [2024-11-29 07:41:10.139855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.386 [2024-11-29 07:41:10.139894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.386 [2024-11-29 07:41:10.139980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.387 [2024-11-29 07:41:10.140044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.387 [2024-11-29 07:41:10.140061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 [2024-11-29 07:41:10.267648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:20.387 [2024-11-29 07:41:10.269480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:20.387 [2024-11-29 07:41:10.269539] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:20.387 [2024-11-29 07:41:10.269590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:20.387 [2024-11-29 07:41:10.269633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:20.387 [2024-11-29 07:41:10.269651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:20.387 [2024-11-29 07:41:10.269668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.387 [2024-11-29 07:41:10.269680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:20.387 request: 00:09:20.387 { 00:09:20.387 "name": "raid_bdev1", 00:09:20.387 "raid_level": "raid0", 00:09:20.387 "base_bdevs": [ 00:09:20.387 "malloc1", 00:09:20.387 "malloc2", 00:09:20.387 "malloc3" 00:09:20.387 ], 00:09:20.387 "strip_size_kb": 64, 00:09:20.387 "superblock": false, 00:09:20.387 "method": "bdev_raid_create", 00:09:20.387 "req_id": 1 00:09:20.387 } 00:09:20.387 Got JSON-RPC error response 00:09:20.387 response: 00:09:20.387 { 00:09:20.387 "code": -17, 00:09:20.387 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:20.387 } 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.387 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.647 [2024-11-29 07:41:10.331521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.647 [2024-11-29 07:41:10.331571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.647 [2024-11-29 07:41:10.331591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:20.647 [2024-11-29 07:41:10.331601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.647 [2024-11-29 07:41:10.333753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.647 [2024-11-29 07:41:10.333788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.647 [2024-11-29 07:41:10.333864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.647 [2024-11-29 07:41:10.333910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:20.647 pt1 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.647 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.648 "name": "raid_bdev1", 00:09:20.648 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:20.648 
"strip_size_kb": 64, 00:09:20.648 "state": "configuring", 00:09:20.648 "raid_level": "raid0", 00:09:20.648 "superblock": true, 00:09:20.648 "num_base_bdevs": 3, 00:09:20.648 "num_base_bdevs_discovered": 1, 00:09:20.648 "num_base_bdevs_operational": 3, 00:09:20.648 "base_bdevs_list": [ 00:09:20.648 { 00:09:20.648 "name": "pt1", 00:09:20.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.648 "is_configured": true, 00:09:20.648 "data_offset": 2048, 00:09:20.648 "data_size": 63488 00:09:20.648 }, 00:09:20.648 { 00:09:20.648 "name": null, 00:09:20.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.648 "is_configured": false, 00:09:20.648 "data_offset": 2048, 00:09:20.648 "data_size": 63488 00:09:20.648 }, 00:09:20.648 { 00:09:20.648 "name": null, 00:09:20.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.648 "is_configured": false, 00:09:20.648 "data_offset": 2048, 00:09:20.648 "data_size": 63488 00:09:20.648 } 00:09:20.648 ] 00:09:20.648 }' 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.648 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.908 [2024-11-29 07:41:10.750810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.908 [2024-11-29 07:41:10.750877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.908 [2024-11-29 07:41:10.750904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:20.908 [2024-11-29 07:41:10.750914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.908 [2024-11-29 07:41:10.751355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.908 [2024-11-29 07:41:10.751374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.908 [2024-11-29 07:41:10.751473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.908 [2024-11-29 07:41:10.751502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.908 pt2 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.908 [2024-11-29 07:41:10.758793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.908 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.909 07:41:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.909 "name": "raid_bdev1", 00:09:20.909 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:20.909 "strip_size_kb": 64, 00:09:20.909 "state": "configuring", 00:09:20.909 "raid_level": "raid0", 00:09:20.909 "superblock": true, 00:09:20.909 "num_base_bdevs": 3, 00:09:20.909 "num_base_bdevs_discovered": 1, 00:09:20.909 "num_base_bdevs_operational": 3, 00:09:20.909 "base_bdevs_list": [ 00:09:20.909 { 00:09:20.909 "name": "pt1", 00:09:20.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.909 "is_configured": true, 00:09:20.909 "data_offset": 2048, 00:09:20.909 "data_size": 63488 00:09:20.909 }, 00:09:20.909 { 00:09:20.909 "name": null, 00:09:20.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.909 "is_configured": false, 00:09:20.909 "data_offset": 0, 00:09:20.909 "data_size": 63488 00:09:20.909 }, 00:09:20.909 { 00:09:20.909 "name": null, 00:09:20.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.909 
"is_configured": false, 00:09:20.909 "data_offset": 2048, 00:09:20.909 "data_size": 63488 00:09:20.909 } 00:09:20.909 ] 00:09:20.909 }' 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.909 07:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 [2024-11-29 07:41:11.202010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.477 [2024-11-29 07:41:11.202080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.477 [2024-11-29 07:41:11.202110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:21.477 [2024-11-29 07:41:11.202121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.477 [2024-11-29 07:41:11.202612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.477 [2024-11-29 07:41:11.202639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.477 [2024-11-29 07:41:11.202725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.477 [2024-11-29 07:41:11.202755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.477 pt2 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 [2024-11-29 07:41:11.209977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.477 [2024-11-29 07:41:11.210025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.477 [2024-11-29 07:41:11.210040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:21.477 [2024-11-29 07:41:11.210049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.477 [2024-11-29 07:41:11.210418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.477 [2024-11-29 07:41:11.210450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.477 [2024-11-29 07:41:11.210511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:21.477 [2024-11-29 07:41:11.210531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.477 [2024-11-29 07:41:11.210640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.477 [2024-11-29 07:41:11.210650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.477 [2024-11-29 07:41:11.210894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:21.477 [2024-11-29 07:41:11.211050] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.477 [2024-11-29 07:41:11.211062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:21.477 [2024-11-29 07:41:11.211216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.477 pt3 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.477 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.477 "name": "raid_bdev1", 00:09:21.477 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:21.477 "strip_size_kb": 64, 00:09:21.477 "state": "online", 00:09:21.477 "raid_level": "raid0", 00:09:21.477 "superblock": true, 00:09:21.477 "num_base_bdevs": 3, 00:09:21.477 "num_base_bdevs_discovered": 3, 00:09:21.477 "num_base_bdevs_operational": 3, 00:09:21.477 "base_bdevs_list": [ 00:09:21.477 { 00:09:21.477 "name": "pt1", 00:09:21.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.477 "is_configured": true, 00:09:21.477 "data_offset": 2048, 00:09:21.477 "data_size": 63488 00:09:21.478 }, 00:09:21.478 { 00:09:21.478 "name": "pt2", 00:09:21.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.478 "is_configured": true, 00:09:21.478 "data_offset": 2048, 00:09:21.478 "data_size": 63488 00:09:21.478 }, 00:09:21.478 { 00:09:21.478 "name": "pt3", 00:09:21.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.478 "is_configured": true, 00:09:21.478 "data_offset": 2048, 00:09:21.478 "data_size": 63488 00:09:21.478 } 00:09:21.478 ] 00:09:21.478 }' 00:09:21.478 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.478 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:21.737 07:41:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.737 [2024-11-29 07:41:11.653549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.737 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.737 "name": "raid_bdev1", 00:09:21.737 "aliases": [ 00:09:21.737 "246c4f83-05e5-40d1-8797-271c4c1feca3" 00:09:21.737 ], 00:09:21.737 "product_name": "Raid Volume", 00:09:21.737 "block_size": 512, 00:09:21.737 "num_blocks": 190464, 00:09:21.737 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:21.737 "assigned_rate_limits": { 00:09:21.737 "rw_ios_per_sec": 0, 00:09:21.737 "rw_mbytes_per_sec": 0, 00:09:21.737 "r_mbytes_per_sec": 0, 00:09:21.737 "w_mbytes_per_sec": 0 00:09:21.737 }, 00:09:21.737 "claimed": false, 00:09:21.737 "zoned": false, 00:09:21.737 "supported_io_types": { 00:09:21.737 "read": true, 00:09:21.737 "write": true, 00:09:21.737 "unmap": true, 00:09:21.737 "flush": true, 00:09:21.737 "reset": true, 00:09:21.737 "nvme_admin": false, 00:09:21.737 "nvme_io": false, 00:09:21.737 "nvme_io_md": false, 00:09:21.737 
"write_zeroes": true, 00:09:21.737 "zcopy": false, 00:09:21.737 "get_zone_info": false, 00:09:21.737 "zone_management": false, 00:09:21.737 "zone_append": false, 00:09:21.737 "compare": false, 00:09:21.737 "compare_and_write": false, 00:09:21.737 "abort": false, 00:09:21.737 "seek_hole": false, 00:09:21.737 "seek_data": false, 00:09:21.737 "copy": false, 00:09:21.737 "nvme_iov_md": false 00:09:21.737 }, 00:09:21.737 "memory_domains": [ 00:09:21.737 { 00:09:21.737 "dma_device_id": "system", 00:09:21.737 "dma_device_type": 1 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.737 "dma_device_type": 2 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "dma_device_id": "system", 00:09:21.737 "dma_device_type": 1 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.737 "dma_device_type": 2 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "dma_device_id": "system", 00:09:21.737 "dma_device_type": 1 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.737 "dma_device_type": 2 00:09:21.737 } 00:09:21.737 ], 00:09:21.737 "driver_specific": { 00:09:21.737 "raid": { 00:09:21.737 "uuid": "246c4f83-05e5-40d1-8797-271c4c1feca3", 00:09:21.737 "strip_size_kb": 64, 00:09:21.737 "state": "online", 00:09:21.737 "raid_level": "raid0", 00:09:21.737 "superblock": true, 00:09:21.737 "num_base_bdevs": 3, 00:09:21.737 "num_base_bdevs_discovered": 3, 00:09:21.737 "num_base_bdevs_operational": 3, 00:09:21.737 "base_bdevs_list": [ 00:09:21.737 { 00:09:21.737 "name": "pt1", 00:09:21.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.737 "is_configured": true, 00:09:21.737 "data_offset": 2048, 00:09:21.737 "data_size": 63488 00:09:21.737 }, 00:09:21.737 { 00:09:21.737 "name": "pt2", 00:09:21.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.737 "is_configured": true, 00:09:21.737 "data_offset": 2048, 00:09:21.737 "data_size": 63488 00:09:21.737 }, 00:09:21.737 
{ 00:09:21.737 "name": "pt3", 00:09:21.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.737 "is_configured": true, 00:09:21.738 "data_offset": 2048, 00:09:21.738 "data_size": 63488 00:09:21.738 } 00:09:21.738 ] 00:09:21.738 } 00:09:21.738 } 00:09:21.738 }' 00:09:21.738 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.996 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.996 pt2 00:09:21.996 pt3' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.997 07:41:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.997 
[2024-11-29 07:41:11.905020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 246c4f83-05e5-40d1-8797-271c4c1feca3 '!=' 246c4f83-05e5-40d1-8797-271c4c1feca3 ']' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64878 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64878 ']' 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64878 00:09:21.997 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64878 00:09:22.256 killing process with pid 64878 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64878' 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64878 00:09:22.256 [2024-11-29 07:41:11.972826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.256 [2024-11-29 07:41:11.972917] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.256 [2024-11-29 07:41:11.972973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.256 [2024-11-29 07:41:11.972984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:22.256 07:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64878 00:09:22.515 [2024-11-29 07:41:12.261592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.451 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.451 00:09:23.452 real 0m5.077s 00:09:23.452 user 0m7.322s 00:09:23.452 sys 0m0.786s 00:09:23.452 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.452 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.452 ************************************ 00:09:23.452 END TEST raid_superblock_test 00:09:23.452 ************************************ 00:09:23.712 07:41:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:23.712 07:41:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.712 07:41:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.712 07:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.712 ************************************ 00:09:23.712 START TEST raid_read_error_test 00:09:23.712 ************************************ 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:23.712 07:41:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rOX6srJUQa 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65126 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65126 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65126 ']' 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.712 07:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.712 [2024-11-29 07:41:13.515207] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:23.712 [2024-11-29 07:41:13.515332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65126 ] 00:09:23.971 [2024-11-29 07:41:13.689477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.971 [2024-11-29 07:41:13.799093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.230 [2024-11-29 07:41:13.994345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.230 [2024-11-29 07:41:13.994418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.489 BaseBdev1_malloc 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.489 true 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.489 [2024-11-29 07:41:14.392454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.489 [2024-11-29 07:41:14.392509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.489 [2024-11-29 07:41:14.392547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.489 [2024-11-29 07:41:14.392557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.489 [2024-11-29 07:41:14.394614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.489 [2024-11-29 07:41:14.394666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.489 BaseBdev1 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.489 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.748 BaseBdev2_malloc 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.748 true 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.748 [2024-11-29 07:41:14.450893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.748 [2024-11-29 07:41:14.450946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.748 [2024-11-29 07:41:14.450963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.748 [2024-11-29 07:41:14.450974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.748 [2024-11-29 07:41:14.453091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.748 [2024-11-29 07:41:14.453135] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.748 BaseBdev2 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.748 BaseBdev3_malloc 00:09:24.748 07:41:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.748 true 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:24.748 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 [2024-11-29 07:41:14.520084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:24.749 [2024-11-29 07:41:14.520149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.749 [2024-11-29 07:41:14.520166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:24.749 [2024-11-29 07:41:14.520176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.749 [2024-11-29 07:41:14.522288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.749 [2024-11-29 07:41:14.522327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:24.749 BaseBdev3 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 [2024-11-29 07:41:14.528159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.749 [2024-11-29 07:41:14.529897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.749 [2024-11-29 07:41:14.529975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.749 [2024-11-29 07:41:14.530174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.749 [2024-11-29 07:41:14.530191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.749 [2024-11-29 07:41:14.530433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:24.749 [2024-11-29 07:41:14.530605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.749 [2024-11-29 07:41:14.530626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:24.749 [2024-11-29 07:41:14.530779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.749 07:41:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.749 "name": "raid_bdev1", 00:09:24.749 "uuid": "c932cc01-cb9e-4ca5-acac-e8040f4b64fe", 00:09:24.749 "strip_size_kb": 64, 00:09:24.749 "state": "online", 00:09:24.749 "raid_level": "raid0", 00:09:24.749 "superblock": true, 00:09:24.749 "num_base_bdevs": 3, 00:09:24.749 "num_base_bdevs_discovered": 3, 00:09:24.749 "num_base_bdevs_operational": 3, 00:09:24.749 "base_bdevs_list": [ 00:09:24.749 { 00:09:24.749 "name": "BaseBdev1", 00:09:24.749 "uuid": "f49b00a1-07b4-5ab7-8cae-fc6692f5acbd", 00:09:24.749 "is_configured": true, 00:09:24.749 "data_offset": 2048, 00:09:24.749 "data_size": 63488 00:09:24.749 }, 00:09:24.749 { 00:09:24.749 "name": "BaseBdev2", 00:09:24.749 "uuid": "90796733-cd25-5461-8675-7740e2e9c1ca", 00:09:24.749 "is_configured": true, 00:09:24.749 "data_offset": 2048, 00:09:24.749 "data_size": 63488 
00:09:24.749 }, 00:09:24.749 { 00:09:24.749 "name": "BaseBdev3", 00:09:24.749 "uuid": "4b928303-18c9-5f74-929d-c5aac16280f7", 00:09:24.749 "is_configured": true, 00:09:24.749 "data_offset": 2048, 00:09:24.749 "data_size": 63488 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 }' 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.749 07:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.318 07:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.318 07:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.318 [2024-11-29 07:41:15.100576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.259 "name": "raid_bdev1", 00:09:26.259 "uuid": "c932cc01-cb9e-4ca5-acac-e8040f4b64fe", 00:09:26.259 "strip_size_kb": 64, 00:09:26.259 "state": "online", 00:09:26.259 "raid_level": "raid0", 00:09:26.259 "superblock": true, 00:09:26.259 "num_base_bdevs": 3, 00:09:26.259 "num_base_bdevs_discovered": 3, 00:09:26.259 "num_base_bdevs_operational": 3, 00:09:26.259 "base_bdevs_list": [ 00:09:26.259 { 00:09:26.259 "name": "BaseBdev1", 00:09:26.259 "uuid": "f49b00a1-07b4-5ab7-8cae-fc6692f5acbd", 00:09:26.259 "is_configured": true, 00:09:26.259 "data_offset": 2048, 00:09:26.259 "data_size": 63488 
00:09:26.259 }, 00:09:26.259 { 00:09:26.259 "name": "BaseBdev2", 00:09:26.259 "uuid": "90796733-cd25-5461-8675-7740e2e9c1ca", 00:09:26.259 "is_configured": true, 00:09:26.259 "data_offset": 2048, 00:09:26.259 "data_size": 63488 00:09:26.259 }, 00:09:26.259 { 00:09:26.259 "name": "BaseBdev3", 00:09:26.259 "uuid": "4b928303-18c9-5f74-929d-c5aac16280f7", 00:09:26.259 "is_configured": true, 00:09:26.259 "data_offset": 2048, 00:09:26.259 "data_size": 63488 00:09:26.259 } 00:09:26.259 ] 00:09:26.259 }' 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.259 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 [2024-11-29 07:41:16.431934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.520 [2024-11-29 07:41:16.431969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.520 [2024-11-29 07:41:16.434662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.520 [2024-11-29 07:41:16.434708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.520 [2024-11-29 07:41:16.434744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.520 [2024-11-29 07:41:16.434753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:26.520 { 00:09:26.520 "results": [ 00:09:26.520 { 00:09:26.520 "job": "raid_bdev1", 00:09:26.520 "core_mask": "0x1", 00:09:26.520 "workload": "randrw", 00:09:26.520 "percentage": 50, 
00:09:26.520 "status": "finished", 00:09:26.520 "queue_depth": 1, 00:09:26.520 "io_size": 131072, 00:09:26.520 "runtime": 1.332306, 00:09:26.520 "iops": 15789.16555205786, 00:09:26.520 "mibps": 1973.6456940072326, 00:09:26.520 "io_failed": 1, 00:09:26.520 "io_timeout": 0, 00:09:26.520 "avg_latency_us": 87.86876339524892, 00:09:26.520 "min_latency_us": 21.687336244541484, 00:09:26.520 "max_latency_us": 1323.598253275109 00:09:26.520 } 00:09:26.520 ], 00:09:26.520 "core_count": 1 00:09:26.520 } 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65126 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65126 ']' 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65126 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.520 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65126 00:09:26.781 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.781 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.781 killing process with pid 65126 00:09:26.781 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65126' 00:09:26.781 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65126 00:09:26.781 [2024-11-29 07:41:16.479468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.781 07:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65126 00:09:26.781 [2024-11-29 
07:41:16.700945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.163 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rOX6srJUQa 00:09:28.163 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.163 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.163 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:28.163 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:28.164 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.164 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.164 07:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:28.164 00:09:28.164 real 0m4.448s 00:09:28.164 user 0m5.285s 00:09:28.164 sys 0m0.562s 00:09:28.164 07:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.164 07:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.164 ************************************ 00:09:28.164 END TEST raid_read_error_test 00:09:28.164 ************************************ 00:09:28.164 07:41:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:28.164 07:41:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.164 07:41:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.164 07:41:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.164 ************************************ 00:09:28.164 START TEST raid_write_error_test 00:09:28.164 ************************************ 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:28.164 07:41:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.164 07:41:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GK2arLCbu4 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65277 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65277 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65277 ']' 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.164 07:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.164 [2024-11-29 07:41:18.032104] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:28.164 [2024-11-29 07:41:18.032226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65277 ] 00:09:28.424 [2024-11-29 07:41:18.202667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.424 [2024-11-29 07:41:18.311619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.684 [2024-11-29 07:41:18.502519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.684 [2024-11-29 07:41:18.502560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.944 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 BaseBdev1_malloc 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 true 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-29 07:41:18.931570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.204 [2024-11-29 07:41:18.931624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-29 07:41:18.931644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.204 [2024-11-29 07:41:18.931655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-29 07:41:18.933750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-29 07:41:18.933788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.204 BaseBdev1 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.204 BaseBdev2_malloc 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 true 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-29 07:41:18.999608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.204 [2024-11-29 07:41:18.999660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-29 07:41:18.999677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.204 [2024-11-29 07:41:18.999688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-29 07:41:19.001902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-29 07:41:19.001941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.204 BaseBdev2 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.204 07:41:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 BaseBdev3_malloc 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 true 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-29 07:41:19.080502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:29.204 [2024-11-29 07:41:19.080553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-29 07:41:19.080569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:29.204 [2024-11-29 07:41:19.080579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-29 07:41:19.082669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-29 07:41:19.082710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:29.204 BaseBdev3 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.205 [2024-11-29 07:41:19.092561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.205 [2024-11-29 07:41:19.094334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.205 [2024-11-29 07:41:19.094408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.205 [2024-11-29 07:41:19.094642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.205 [2024-11-29 07:41:19.094665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.205 [2024-11-29 07:41:19.094895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:29.205 [2024-11-29 07:41:19.095060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.205 [2024-11-29 07:41:19.095080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:29.205 [2024-11-29 07:41:19.095243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.205 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.205 "name": "raid_bdev1", 00:09:29.205 "uuid": "a7e4c826-67a6-4c55-a47f-79ab85af4240", 00:09:29.205 "strip_size_kb": 64, 00:09:29.205 "state": "online", 00:09:29.205 "raid_level": "raid0", 00:09:29.205 "superblock": true, 00:09:29.205 "num_base_bdevs": 3, 00:09:29.205 "num_base_bdevs_discovered": 3, 00:09:29.205 "num_base_bdevs_operational": 3, 00:09:29.205 "base_bdevs_list": [ 00:09:29.205 { 00:09:29.205 "name": "BaseBdev1", 
00:09:29.205 "uuid": "cc761223-055b-54d2-a1b1-c7bdcdf9ea36", 00:09:29.205 "is_configured": true, 00:09:29.205 "data_offset": 2048, 00:09:29.205 "data_size": 63488 00:09:29.205 }, 00:09:29.205 { 00:09:29.205 "name": "BaseBdev2", 00:09:29.205 "uuid": "eb4b3fd5-b930-53ca-8c58-ed882e82b5ee", 00:09:29.205 "is_configured": true, 00:09:29.205 "data_offset": 2048, 00:09:29.205 "data_size": 63488 00:09:29.205 }, 00:09:29.205 { 00:09:29.205 "name": "BaseBdev3", 00:09:29.205 "uuid": "8f6719a3-56c2-526c-a5b6-d8e5242dc278", 00:09:29.205 "is_configured": true, 00:09:29.205 "data_offset": 2048, 00:09:29.205 "data_size": 63488 00:09:29.205 } 00:09:29.205 ] 00:09:29.205 }' 00:09:29.465 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.465 07:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.726 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.726 07:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:29.726 [2024-11-29 07:41:19.585074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.668 "name": "raid_bdev1", 00:09:30.668 "uuid": "a7e4c826-67a6-4c55-a47f-79ab85af4240", 00:09:30.668 "strip_size_kb": 64, 00:09:30.668 "state": "online", 00:09:30.668 
"raid_level": "raid0", 00:09:30.668 "superblock": true, 00:09:30.668 "num_base_bdevs": 3, 00:09:30.668 "num_base_bdevs_discovered": 3, 00:09:30.668 "num_base_bdevs_operational": 3, 00:09:30.668 "base_bdevs_list": [ 00:09:30.668 { 00:09:30.668 "name": "BaseBdev1", 00:09:30.668 "uuid": "cc761223-055b-54d2-a1b1-c7bdcdf9ea36", 00:09:30.668 "is_configured": true, 00:09:30.668 "data_offset": 2048, 00:09:30.668 "data_size": 63488 00:09:30.668 }, 00:09:30.668 { 00:09:30.668 "name": "BaseBdev2", 00:09:30.668 "uuid": "eb4b3fd5-b930-53ca-8c58-ed882e82b5ee", 00:09:30.668 "is_configured": true, 00:09:30.668 "data_offset": 2048, 00:09:30.668 "data_size": 63488 00:09:30.668 }, 00:09:30.668 { 00:09:30.668 "name": "BaseBdev3", 00:09:30.668 "uuid": "8f6719a3-56c2-526c-a5b6-d8e5242dc278", 00:09:30.668 "is_configured": true, 00:09:30.668 "data_offset": 2048, 00:09:30.668 "data_size": 63488 00:09:30.668 } 00:09:30.668 ] 00:09:30.668 }' 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.668 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.238 [2024-11-29 07:41:20.952641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.238 [2024-11-29 07:41:20.952673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.238 [2024-11-29 07:41:20.955342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.238 [2024-11-29 07:41:20.955388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.238 [2024-11-29 07:41:20.955427] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.238 [2024-11-29 07:41:20.955466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:31.238 { 00:09:31.238 "results": [ 00:09:31.238 { 00:09:31.238 "job": "raid_bdev1", 00:09:31.238 "core_mask": "0x1", 00:09:31.238 "workload": "randrw", 00:09:31.238 "percentage": 50, 00:09:31.238 "status": "finished", 00:09:31.238 "queue_depth": 1, 00:09:31.238 "io_size": 131072, 00:09:31.238 "runtime": 1.36856, 00:09:31.238 "iops": 15877.27246156544, 00:09:31.238 "mibps": 1984.65905769568, 00:09:31.238 "io_failed": 1, 00:09:31.238 "io_timeout": 0, 00:09:31.238 "avg_latency_us": 87.3766418751771, 00:09:31.238 "min_latency_us": 18.892576419213974, 00:09:31.238 "max_latency_us": 1373.6803493449781 00:09:31.238 } 00:09:31.238 ], 00:09:31.238 "core_count": 1 00:09:31.238 } 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65277 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65277 ']' 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65277 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65277 00:09:31.238 07:41:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.238 07:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.238 killing process with pid 65277 00:09:31.238 07:41:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65277' 00:09:31.238 07:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65277 00:09:31.238 [2024-11-29 07:41:21.002734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.238 07:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65277 00:09:31.498 [2024-11-29 07:41:21.230744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GK2arLCbu4 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:32.437 00:09:32.437 real 0m4.444s 00:09:32.437 user 0m5.261s 00:09:32.437 sys 0m0.539s 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.437 07:41:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.437 ************************************ 00:09:32.437 END TEST raid_write_error_test 00:09:32.437 ************************************ 00:09:32.698 07:41:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.698 07:41:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:32.698 07:41:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.698 07:41:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.698 07:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 ************************************ 00:09:32.698 START TEST raid_state_function_test 00:09:32.698 ************************************ 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.698 07:41:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65415 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65415' 00:09:32.698 Process raid pid: 65415 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65415 00:09:32.698 07:41:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65415 ']' 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.698 07:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 [2024-11-29 07:41:22.542329] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:32.698 [2024-11-29 07:41:22.542537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.958 [2024-11-29 07:41:22.713547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.958 [2024-11-29 07:41:22.823125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.218 [2024-11-29 07:41:23.021097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.218 [2024-11-29 07:41:23.021145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.478 [2024-11-29 07:41:23.369980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.478 [2024-11-29 07:41:23.370113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.478 [2024-11-29 07:41:23.370146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.478 [2024-11-29 07:41:23.370156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.478 [2024-11-29 07:41:23.370162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.478 [2024-11-29 07:41:23.370171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.478 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.738 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.738 "name": "Existed_Raid", 00:09:33.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.738 "strip_size_kb": 64, 00:09:33.738 "state": "configuring", 00:09:33.738 "raid_level": "concat", 00:09:33.738 "superblock": false, 00:09:33.738 "num_base_bdevs": 3, 00:09:33.738 "num_base_bdevs_discovered": 0, 00:09:33.738 "num_base_bdevs_operational": 3, 00:09:33.738 "base_bdevs_list": [ 00:09:33.738 { 00:09:33.738 "name": "BaseBdev1", 00:09:33.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.738 "is_configured": false, 00:09:33.738 "data_offset": 0, 00:09:33.738 "data_size": 0 00:09:33.738 }, 00:09:33.738 { 00:09:33.738 "name": "BaseBdev2", 00:09:33.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.738 "is_configured": false, 00:09:33.738 "data_offset": 0, 00:09:33.738 "data_size": 0 00:09:33.738 }, 00:09:33.738 { 00:09:33.738 "name": "BaseBdev3", 00:09:33.738 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:33.738 "is_configured": false, 00:09:33.738 "data_offset": 0, 00:09:33.738 "data_size": 0 00:09:33.738 } 00:09:33.738 ] 00:09:33.738 }' 00:09:33.738 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.738 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 [2024-11-29 07:41:23.789209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.000 [2024-11-29 07:41:23.789298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 [2024-11-29 07:41:23.801183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.000 [2024-11-29 07:41:23.801259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.000 [2024-11-29 07:41:23.801304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.000 [2024-11-29 07:41:23.801326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:34.000 [2024-11-29 07:41:23.801352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.000 [2024-11-29 07:41:23.801375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 [2024-11-29 07:41:23.847252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.000 BaseBdev1 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.000 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.000 [ 00:09:34.000 { 00:09:34.000 "name": "BaseBdev1", 00:09:34.000 "aliases": [ 00:09:34.000 "ad8617e4-ebb9-4be9-ac18-c963b128ea72" 00:09:34.000 ], 00:09:34.000 "product_name": "Malloc disk", 00:09:34.000 "block_size": 512, 00:09:34.000 "num_blocks": 65536, 00:09:34.000 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72", 00:09:34.000 "assigned_rate_limits": { 00:09:34.000 "rw_ios_per_sec": 0, 00:09:34.000 "rw_mbytes_per_sec": 0, 00:09:34.000 "r_mbytes_per_sec": 0, 00:09:34.000 "w_mbytes_per_sec": 0 00:09:34.000 }, 00:09:34.000 "claimed": true, 00:09:34.000 "claim_type": "exclusive_write", 00:09:34.000 "zoned": false, 00:09:34.000 "supported_io_types": { 00:09:34.000 "read": true, 00:09:34.000 "write": true, 00:09:34.000 "unmap": true, 00:09:34.000 "flush": true, 00:09:34.000 "reset": true, 00:09:34.000 "nvme_admin": false, 00:09:34.000 "nvme_io": false, 00:09:34.000 "nvme_io_md": false, 00:09:34.000 "write_zeroes": true, 00:09:34.000 "zcopy": true, 00:09:34.000 "get_zone_info": false, 00:09:34.000 "zone_management": false, 00:09:34.000 "zone_append": false, 00:09:34.000 "compare": false, 00:09:34.000 "compare_and_write": false, 00:09:34.000 "abort": true, 00:09:34.000 "seek_hole": false, 00:09:34.000 "seek_data": false, 00:09:34.000 "copy": true, 00:09:34.000 "nvme_iov_md": false 00:09:34.001 }, 00:09:34.001 "memory_domains": [ 00:09:34.001 { 00:09:34.001 "dma_device_id": "system", 00:09:34.001 "dma_device_type": 1 00:09:34.001 }, 00:09:34.001 { 00:09:34.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:34.001 "dma_device_type": 2 00:09:34.001 } 00:09:34.001 ], 00:09:34.001 "driver_specific": {} 00:09:34.001 } 00:09:34.001 ] 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.001 07:41:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.001 "name": "Existed_Raid", 00:09:34.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.001 "strip_size_kb": 64, 00:09:34.001 "state": "configuring", 00:09:34.001 "raid_level": "concat", 00:09:34.001 "superblock": false, 00:09:34.001 "num_base_bdevs": 3, 00:09:34.001 "num_base_bdevs_discovered": 1, 00:09:34.001 "num_base_bdevs_operational": 3, 00:09:34.001 "base_bdevs_list": [ 00:09:34.001 { 00:09:34.001 "name": "BaseBdev1", 00:09:34.001 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72", 00:09:34.001 "is_configured": true, 00:09:34.001 "data_offset": 0, 00:09:34.001 "data_size": 65536 00:09:34.001 }, 00:09:34.001 { 00:09:34.001 "name": "BaseBdev2", 00:09:34.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.001 "is_configured": false, 00:09:34.001 "data_offset": 0, 00:09:34.001 "data_size": 0 00:09:34.001 }, 00:09:34.001 { 00:09:34.001 "name": "BaseBdev3", 00:09:34.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.001 "is_configured": false, 00:09:34.001 "data_offset": 0, 00:09:34.001 "data_size": 0 00:09:34.001 } 00:09:34.001 ] 00:09:34.001 }' 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.001 07:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.567 [2024-11-29 07:41:24.326468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.567 [2024-11-29 07:41:24.326516] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.567 [2024-11-29 07:41:24.338488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.567 [2024-11-29 07:41:24.340308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.567 [2024-11-29 07:41:24.340383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.567 [2024-11-29 07:41:24.340412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.567 [2024-11-29 07:41:24.340434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.567 07:41:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.567 "name": "Existed_Raid", 00:09:34.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.567 "strip_size_kb": 64, 00:09:34.567 "state": "configuring", 00:09:34.567 "raid_level": "concat", 00:09:34.567 "superblock": false, 00:09:34.567 "num_base_bdevs": 3, 00:09:34.567 "num_base_bdevs_discovered": 1, 00:09:34.567 "num_base_bdevs_operational": 3, 00:09:34.567 "base_bdevs_list": [ 00:09:34.567 { 00:09:34.567 "name": "BaseBdev1", 00:09:34.567 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72", 00:09:34.567 "is_configured": true, 00:09:34.567 "data_offset": 
0, 00:09:34.567 "data_size": 65536 00:09:34.567 }, 00:09:34.567 { 00:09:34.567 "name": "BaseBdev2", 00:09:34.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.567 "is_configured": false, 00:09:34.567 "data_offset": 0, 00:09:34.567 "data_size": 0 00:09:34.567 }, 00:09:34.567 { 00:09:34.567 "name": "BaseBdev3", 00:09:34.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.567 "is_configured": false, 00:09:34.567 "data_offset": 0, 00:09:34.567 "data_size": 0 00:09:34.567 } 00:09:34.567 ] 00:09:34.567 }' 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.567 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.901 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.901 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.901 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 [2024-11-29 07:41:24.831341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.161 BaseBdev2 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 [ 00:09:35.161 { 00:09:35.161 "name": "BaseBdev2", 00:09:35.161 "aliases": [ 00:09:35.161 "027b5cb6-0ffc-4e0f-8131-65aa78eb525a" 00:09:35.161 ], 00:09:35.161 "product_name": "Malloc disk", 00:09:35.161 "block_size": 512, 00:09:35.161 "num_blocks": 65536, 00:09:35.161 "uuid": "027b5cb6-0ffc-4e0f-8131-65aa78eb525a", 00:09:35.161 "assigned_rate_limits": { 00:09:35.161 "rw_ios_per_sec": 0, 00:09:35.161 "rw_mbytes_per_sec": 0, 00:09:35.161 "r_mbytes_per_sec": 0, 00:09:35.161 "w_mbytes_per_sec": 0 00:09:35.161 }, 00:09:35.161 "claimed": true, 00:09:35.161 "claim_type": "exclusive_write", 00:09:35.161 "zoned": false, 00:09:35.161 "supported_io_types": { 00:09:35.161 "read": true, 00:09:35.161 "write": true, 00:09:35.161 "unmap": true, 00:09:35.161 "flush": true, 00:09:35.161 "reset": true, 00:09:35.161 "nvme_admin": false, 00:09:35.161 "nvme_io": false, 00:09:35.161 "nvme_io_md": false, 00:09:35.161 "write_zeroes": true, 00:09:35.161 "zcopy": true, 00:09:35.161 "get_zone_info": false, 00:09:35.161 "zone_management": false, 00:09:35.161 "zone_append": false, 00:09:35.161 "compare": false, 00:09:35.161 "compare_and_write": false, 00:09:35.161 "abort": true, 00:09:35.161 "seek_hole": 
false, 00:09:35.161 "seek_data": false, 00:09:35.161 "copy": true, 00:09:35.161 "nvme_iov_md": false 00:09:35.161 }, 00:09:35.161 "memory_domains": [ 00:09:35.161 { 00:09:35.161 "dma_device_id": "system", 00:09:35.161 "dma_device_type": 1 00:09:35.161 }, 00:09:35.161 { 00:09:35.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.161 "dma_device_type": 2 00:09:35.161 } 00:09:35.161 ], 00:09:35.161 "driver_specific": {} 00:09:35.161 } 00:09:35.161 ] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.161 "name": "Existed_Raid", 00:09:35.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.161 "strip_size_kb": 64, 00:09:35.161 "state": "configuring", 00:09:35.161 "raid_level": "concat", 00:09:35.161 "superblock": false, 00:09:35.161 "num_base_bdevs": 3, 00:09:35.161 "num_base_bdevs_discovered": 2, 00:09:35.161 "num_base_bdevs_operational": 3, 00:09:35.161 "base_bdevs_list": [ 00:09:35.161 { 00:09:35.161 "name": "BaseBdev1", 00:09:35.161 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72", 00:09:35.161 "is_configured": true, 00:09:35.161 "data_offset": 0, 00:09:35.161 "data_size": 65536 00:09:35.161 }, 00:09:35.161 { 00:09:35.161 "name": "BaseBdev2", 00:09:35.161 "uuid": "027b5cb6-0ffc-4e0f-8131-65aa78eb525a", 00:09:35.161 "is_configured": true, 00:09:35.161 "data_offset": 0, 00:09:35.161 "data_size": 65536 00:09:35.161 }, 00:09:35.161 { 00:09:35.161 "name": "BaseBdev3", 00:09:35.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.161 "is_configured": false, 00:09:35.161 "data_offset": 0, 00:09:35.161 "data_size": 0 00:09:35.161 } 00:09:35.161 ] 00:09:35.161 }' 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.161 07:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.421 [2024-11-29 07:41:25.342074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.421 [2024-11-29 07:41:25.342229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.421 [2024-11-29 07:41:25.342247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:35.421 [2024-11-29 07:41:25.342535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.421 [2024-11-29 07:41:25.342710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.421 [2024-11-29 07:41:25.342720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.421 [2024-11-29 07:41:25.343000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.421 BaseBdev3 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.421 07:41:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.421 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.680 [ 00:09:35.680 { 00:09:35.680 "name": "BaseBdev3", 00:09:35.680 "aliases": [ 00:09:35.680 "7718318c-eb3b-4363-81aa-504d69349772" 00:09:35.680 ], 00:09:35.680 "product_name": "Malloc disk", 00:09:35.680 "block_size": 512, 00:09:35.680 "num_blocks": 65536, 00:09:35.680 "uuid": "7718318c-eb3b-4363-81aa-504d69349772", 00:09:35.680 "assigned_rate_limits": { 00:09:35.680 "rw_ios_per_sec": 0, 00:09:35.680 "rw_mbytes_per_sec": 0, 00:09:35.680 "r_mbytes_per_sec": 0, 00:09:35.680 "w_mbytes_per_sec": 0 00:09:35.680 }, 00:09:35.680 "claimed": true, 00:09:35.680 "claim_type": "exclusive_write", 00:09:35.680 "zoned": false, 00:09:35.680 "supported_io_types": { 00:09:35.680 "read": true, 00:09:35.680 "write": true, 00:09:35.680 "unmap": true, 00:09:35.680 "flush": true, 00:09:35.680 "reset": true, 00:09:35.680 "nvme_admin": false, 00:09:35.680 "nvme_io": false, 00:09:35.680 "nvme_io_md": false, 00:09:35.680 "write_zeroes": true, 00:09:35.680 "zcopy": true, 00:09:35.680 "get_zone_info": false, 00:09:35.680 "zone_management": false, 00:09:35.680 "zone_append": false, 00:09:35.680 "compare": false, 
00:09:35.680 "compare_and_write": false, 00:09:35.680 "abort": true, 00:09:35.680 "seek_hole": false, 00:09:35.680 "seek_data": false, 00:09:35.680 "copy": true, 00:09:35.680 "nvme_iov_md": false 00:09:35.680 }, 00:09:35.680 "memory_domains": [ 00:09:35.680 { 00:09:35.680 "dma_device_id": "system", 00:09:35.680 "dma_device_type": 1 00:09:35.680 }, 00:09:35.680 { 00:09:35.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.680 "dma_device_type": 2 00:09:35.680 } 00:09:35.680 ], 00:09:35.680 "driver_specific": {} 00:09:35.680 } 00:09:35.680 ] 00:09:35.680 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.680 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.680 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.680 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.680 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.681 "name": "Existed_Raid", 00:09:35.681 "uuid": "00b63cac-a4c4-47b8-8956-6d141105f74f", 00:09:35.681 "strip_size_kb": 64, 00:09:35.681 "state": "online", 00:09:35.681 "raid_level": "concat", 00:09:35.681 "superblock": false, 00:09:35.681 "num_base_bdevs": 3, 00:09:35.681 "num_base_bdevs_discovered": 3, 00:09:35.681 "num_base_bdevs_operational": 3, 00:09:35.681 "base_bdevs_list": [ 00:09:35.681 { 00:09:35.681 "name": "BaseBdev1", 00:09:35.681 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72", 00:09:35.681 "is_configured": true, 00:09:35.681 "data_offset": 0, 00:09:35.681 "data_size": 65536 00:09:35.681 }, 00:09:35.681 { 00:09:35.681 "name": "BaseBdev2", 00:09:35.681 "uuid": "027b5cb6-0ffc-4e0f-8131-65aa78eb525a", 00:09:35.681 "is_configured": true, 00:09:35.681 "data_offset": 0, 00:09:35.681 "data_size": 65536 00:09:35.681 }, 00:09:35.681 { 00:09:35.681 "name": "BaseBdev3", 00:09:35.681 "uuid": "7718318c-eb3b-4363-81aa-504d69349772", 00:09:35.681 "is_configured": true, 00:09:35.681 "data_offset": 0, 00:09:35.681 "data_size": 65536 00:09:35.681 } 00:09:35.681 ] 00:09:35.681 }' 00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:35.681 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:35.941 [2024-11-29 07:41:25.821616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.941 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:35.941 "name": "Existed_Raid",
00:09:35.941 "aliases": [
00:09:35.941 "00b63cac-a4c4-47b8-8956-6d141105f74f"
00:09:35.941 ],
00:09:35.941 "product_name": "Raid Volume",
00:09:35.941 "block_size": 512,
00:09:35.941 "num_blocks": 196608,
00:09:35.941 "uuid": "00b63cac-a4c4-47b8-8956-6d141105f74f",
00:09:35.941 "assigned_rate_limits": {
00:09:35.941 "rw_ios_per_sec": 0,
00:09:35.941 "rw_mbytes_per_sec": 0,
00:09:35.941 "r_mbytes_per_sec": 0,
00:09:35.941 "w_mbytes_per_sec": 0
00:09:35.941 },
00:09:35.941 "claimed": false,
00:09:35.941 "zoned": false,
00:09:35.941 "supported_io_types": {
00:09:35.941 "read": true,
00:09:35.941 "write": true,
00:09:35.941 "unmap": true,
00:09:35.941 "flush": true,
00:09:35.941 "reset": true,
00:09:35.941 "nvme_admin": false,
00:09:35.941 "nvme_io": false,
00:09:35.941 "nvme_io_md": false,
00:09:35.941 "write_zeroes": true,
00:09:35.941 "zcopy": false,
00:09:35.941 "get_zone_info": false,
00:09:35.941 "zone_management": false,
00:09:35.941 "zone_append": false,
00:09:35.941 "compare": false,
00:09:35.941 "compare_and_write": false,
00:09:35.941 "abort": false,
00:09:35.941 "seek_hole": false,
00:09:35.941 "seek_data": false,
00:09:35.941 "copy": false,
00:09:35.941 "nvme_iov_md": false
00:09:35.941 },
00:09:35.941 "memory_domains": [
00:09:35.941 {
00:09:35.941 "dma_device_id": "system",
00:09:35.941 "dma_device_type": 1
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.941 "dma_device_type": 2
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "dma_device_id": "system",
00:09:35.941 "dma_device_type": 1
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.941 "dma_device_type": 2
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "dma_device_id": "system",
00:09:35.941 "dma_device_type": 1
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.941 "dma_device_type": 2
00:09:35.941 }
00:09:35.941 ],
00:09:35.941 "driver_specific": {
00:09:35.941 "raid": {
00:09:35.941 "uuid": "00b63cac-a4c4-47b8-8956-6d141105f74f",
00:09:35.941 "strip_size_kb": 64,
00:09:35.941 "state": "online",
00:09:35.941 "raid_level": "concat",
00:09:35.941 "superblock": false,
00:09:35.941 "num_base_bdevs": 3,
00:09:35.941 "num_base_bdevs_discovered": 3,
00:09:35.941 "num_base_bdevs_operational": 3,
00:09:35.941 "base_bdevs_list": [
00:09:35.941 {
00:09:35.941 "name": "BaseBdev1",
00:09:35.941 "uuid": "ad8617e4-ebb9-4be9-ac18-c963b128ea72",
00:09:35.941 "is_configured": true,
00:09:35.941 "data_offset": 0,
00:09:35.941 "data_size": 65536
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "name": "BaseBdev2",
00:09:35.941 "uuid": "027b5cb6-0ffc-4e0f-8131-65aa78eb525a",
00:09:35.941 "is_configured": true,
00:09:35.941 "data_offset": 0,
00:09:35.941 "data_size": 65536
00:09:35.941 },
00:09:35.941 {
00:09:35.941 "name": "BaseBdev3",
00:09:35.941 "uuid": "7718318c-eb3b-4363-81aa-504d69349772",
00:09:35.941 "is_configured": true,
00:09:35.941 "data_offset": 0,
00:09:35.941 "data_size": 65536
00:09:35.941 }
00:09:35.942 ]
00:09:35.942 }
00:09:35.942 }
00:09:35.942 }'
00:09:35.942 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:36.202 BaseBdev2
00:09:36.202 BaseBdev3'
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.202 07:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
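The comparison loop traced above reduces each bdev's `bdev_get_bdevs` JSON to a single `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` string and checks that the raid bdev and every base bdev produce the same string. A minimal standalone sketch of that jq pattern, using hypothetical sample JSON instead of live RPC output (no running SPDK target is assumed):

```shell
# Hypothetical bdev_get_bdevs-style JSON; values mirror the log above
# (block_size 512, no metadata/DIF fields, so jq sees those as null).
json='[{"name":"BaseBdev1","block_size":512}]'

# Same reduction the test performs: jq's join() renders null fields as "",
# so missing md/DIF settings still yield a deterministic, comparable string.
tuple=$(printf '%s' "$json" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# Two bdevs are geometry-compatible for the test when these strings match exactly.
printf '[%s]\n' "$tuple"
```

The trailing spaces in the log's `cmp_base_bdev='512 '` values come from exactly this null-to-empty-string behavior of `join(" ")`, which is why the test's `[[ 512 == \5\1\2\ \ \ ]]` comparison escapes each space individually.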
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.202 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.202 [2024-11-29 07:41:26.064914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:36.202 [2024-11-29 07:41:26.064942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:36.202 [2024-11-29 07:41:26.064999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.463 "name": "Existed_Raid",
00:09:36.463 "uuid": "00b63cac-a4c4-47b8-8956-6d141105f74f",
00:09:36.463 "strip_size_kb": 64,
00:09:36.463 "state": "offline",
00:09:36.463 "raid_level": "concat",
00:09:36.463 "superblock": false,
00:09:36.463 "num_base_bdevs": 3,
00:09:36.463 "num_base_bdevs_discovered": 2,
00:09:36.463 "num_base_bdevs_operational": 2,
00:09:36.463 "base_bdevs_list": [
00:09:36.463 {
00:09:36.463 "name": null,
00:09:36.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.463 "is_configured": false,
00:09:36.463 "data_offset": 0,
00:09:36.463 "data_size": 65536
00:09:36.463 },
00:09:36.463 {
00:09:36.463 "name": "BaseBdev2",
00:09:36.463 "uuid": "027b5cb6-0ffc-4e0f-8131-65aa78eb525a",
00:09:36.463 "is_configured": true,
00:09:36.463 "data_offset": 0,
00:09:36.463 "data_size": 65536
00:09:36.463 },
00:09:36.463 {
00:09:36.463 "name": "BaseBdev3",
00:09:36.463 "uuid": "7718318c-eb3b-4363-81aa-504d69349772",
00:09:36.463 "is_configured": true,
00:09:36.463 "data_offset": 0,
00:09:36.463 "data_size": 65536
00:09:36.463 }
00:09:36.463 ]
00:09:36.463 }'
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.463 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.723 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.723 [2024-11-29 07:41:26.582269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.983 [2024-11-29 07:41:26.731148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:36.983 [2024-11-29 07:41:26.731244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:36.983 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.984 BaseBdev2
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.984 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.244 [
00:09:37.244 {
00:09:37.244 "name": "BaseBdev2",
00:09:37.244 "aliases": [
00:09:37.244 "148147de-991f-4a01-ade2-5538527640b3"
00:09:37.244 ],
00:09:37.244 "product_name": "Malloc disk",
00:09:37.244 "block_size": 512,
00:09:37.244 "num_blocks": 65536,
00:09:37.244 "uuid": "148147de-991f-4a01-ade2-5538527640b3",
00:09:37.244 "assigned_rate_limits": {
00:09:37.244 "rw_ios_per_sec": 0,
00:09:37.244 "rw_mbytes_per_sec": 0,
00:09:37.244 "r_mbytes_per_sec": 0,
00:09:37.244 "w_mbytes_per_sec": 0
00:09:37.244 },
00:09:37.244 "claimed": false,
00:09:37.244 "zoned": false,
00:09:37.244 "supported_io_types": {
00:09:37.244 "read": true,
00:09:37.244 "write": true,
00:09:37.244 "unmap": true,
00:09:37.244 "flush": true,
00:09:37.244 "reset": true,
00:09:37.245 "nvme_admin": false,
00:09:37.245 "nvme_io": false,
00:09:37.245 "nvme_io_md": false,
00:09:37.245 "write_zeroes": true,
00:09:37.245 "zcopy": true,
00:09:37.245 "get_zone_info": false,
00:09:37.245 "zone_management": false,
00:09:37.245 "zone_append": false,
00:09:37.245 "compare": false,
00:09:37.245 "compare_and_write": false,
00:09:37.245 "abort": true,
00:09:37.245 "seek_hole": false,
00:09:37.245 "seek_data": false,
00:09:37.245 "copy": true,
00:09:37.245 "nvme_iov_md": false
00:09:37.245 },
00:09:37.245 "memory_domains": [
00:09:37.245 {
00:09:37.245 "dma_device_id": "system",
00:09:37.245 "dma_device_type": 1
00:09:37.245 },
00:09:37.245 {
00:09:37.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.245 "dma_device_type": 2
00:09:37.245 }
00:09:37.245 ],
00:09:37.245 "driver_specific": {}
00:09:37.245 }
00:09:37.245 ]
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.245 BaseBdev3
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.245 07:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.245 [
00:09:37.245 {
00:09:37.245 "name": "BaseBdev3",
00:09:37.245 "aliases": [
00:09:37.245 "821b6f9d-19d1-41a7-b02f-2b5fedc87b55"
00:09:37.245 ],
00:09:37.245 "product_name": "Malloc disk",
00:09:37.245 "block_size": 512,
00:09:37.245 "num_blocks": 65536,
00:09:37.245 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55",
00:09:37.245 "assigned_rate_limits": {
00:09:37.245 "rw_ios_per_sec": 0,
00:09:37.245 "rw_mbytes_per_sec": 0,
00:09:37.245 "r_mbytes_per_sec": 0,
00:09:37.245 "w_mbytes_per_sec": 0
00:09:37.245 },
00:09:37.245 "claimed": false,
00:09:37.245 "zoned": false,
00:09:37.245 "supported_io_types": {
00:09:37.245 "read": true,
00:09:37.245 "write": true,
00:09:37.245 "unmap": true,
00:09:37.245 "flush": true,
00:09:37.245 "reset": true,
00:09:37.245 "nvme_admin": false,
00:09:37.245 "nvme_io": false,
00:09:37.245 "nvme_io_md": false,
00:09:37.245 "write_zeroes": true,
00:09:37.245 "zcopy": true,
00:09:37.245 "get_zone_info": false,
00:09:37.245 "zone_management": false,
00:09:37.245 "zone_append": false,
00:09:37.245 "compare": false,
00:09:37.245 "compare_and_write": false,
00:09:37.245 "abort": true,
00:09:37.245 "seek_hole": false,
00:09:37.245 "seek_data": false,
00:09:37.245 "copy": true,
00:09:37.245 "nvme_iov_md": false
00:09:37.245 },
00:09:37.245 "memory_domains": [
00:09:37.245 {
00:09:37.245 "dma_device_id": "system",
00:09:37.245 "dma_device_type": 1
00:09:37.245 },
00:09:37.245 {
00:09:37.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.245 "dma_device_type": 2
00:09:37.245 }
00:09:37.245 ],
00:09:37.245 "driver_specific": {}
00:09:37.245 }
00:09:37.245 ]
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.245 [2024-11-29 07:41:27.033966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:37.245 [2024-11-29 07:41:27.034046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:37.245 [2024-11-29 07:41:27.034102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:37.245 [2024-11-29 07:41:27.035823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.245 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.246 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.246 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.246 "name": "Existed_Raid",
00:09:37.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.246 "strip_size_kb": 64,
00:09:37.246 "state": "configuring",
00:09:37.246 "raid_level": "concat",
00:09:37.246 "superblock": false,
00:09:37.246 "num_base_bdevs": 3,
00:09:37.246 "num_base_bdevs_discovered": 2,
00:09:37.246 "num_base_bdevs_operational": 3,
00:09:37.246 "base_bdevs_list": [
00:09:37.246 {
00:09:37.246 "name": "BaseBdev1",
00:09:37.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.246 "is_configured": false,
00:09:37.246 "data_offset": 0,
00:09:37.246 "data_size": 0
00:09:37.246 },
00:09:37.246 {
00:09:37.246 "name": "BaseBdev2",
00:09:37.246 "uuid": "148147de-991f-4a01-ade2-5538527640b3",
00:09:37.246 "is_configured": true,
00:09:37.246 "data_offset": 0,
00:09:37.246 "data_size": 65536
00:09:37.246 },
00:09:37.246 {
00:09:37.246 "name": "BaseBdev3",
00:09:37.246 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55",
00:09:37.246 "is_configured": true,
00:09:37.246 "data_offset": 0,
00:09:37.246 "data_size": 65536
00:09:37.246 }
00:09:37.246 ]
00:09:37.246 }'
00:09:37.246 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.246 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.817 [2024-11-29 07:41:27.485227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.817 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.818 "name": "Existed_Raid",
00:09:37.818 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.818 "strip_size_kb": 64,
00:09:37.818 "state": "configuring",
00:09:37.818 "raid_level": "concat",
00:09:37.818 "superblock": false,
00:09:37.818 "num_base_bdevs": 3,
00:09:37.818 "num_base_bdevs_discovered": 1,
00:09:37.818 "num_base_bdevs_operational": 3,
00:09:37.818 "base_bdevs_list": [
00:09:37.818 {
00:09:37.818 "name": "BaseBdev1",
00:09:37.818 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:37.818 "is_configured": false,
00:09:37.818 "data_offset": 0,
00:09:37.818 "data_size": 0
00:09:37.818 },
00:09:37.818 {
00:09:37.818 "name": null,
00:09:37.818 "uuid": "148147de-991f-4a01-ade2-5538527640b3",
00:09:37.818 "is_configured": false,
00:09:37.818 "data_offset": 0,
00:09:37.818 "data_size": 65536
00:09:37.818 },
00:09:37.818 {
00:09:37.818 "name": "BaseBdev3",
00:09:37.818 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55",
00:09:37.818 "is_configured": true,
00:09:37.818 "data_offset": 0,
00:09:37.818 "data_size": 65536
00:09:37.818 }
00:09:37.818 ]
00:09:37.818 }'
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.818 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.078 07:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.078 [2024-11-29 07:41:28.015601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:38.078 BaseBdev1
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.078 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.339 [
00:09:38.339 {
00:09:38.339 "name": "BaseBdev1",
00:09:38.339 "aliases": [
00:09:38.339 "e423cbaa-6f5c-495d-abe1-2d87f9a952de"
00:09:38.339 ],
00:09:38.339 "product_name": "Malloc disk",
00:09:38.339 "block_size": 512,
00:09:38.339 "num_blocks": 65536,
00:09:38.339 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de",
00:09:38.339 "assigned_rate_limits": {
00:09:38.339 "rw_ios_per_sec": 0,
00:09:38.339 "rw_mbytes_per_sec": 0,
00:09:38.339 "r_mbytes_per_sec": 0,
00:09:38.339 "w_mbytes_per_sec": 0
00:09:38.339 },
00:09:38.339 "claimed": true,
00:09:38.339 "claim_type": "exclusive_write",
00:09:38.339 "zoned": false,
00:09:38.339 "supported_io_types": {
00:09:38.339 "read": true,
00:09:38.339 "write": true,
00:09:38.339 "unmap": true,
00:09:38.339 "flush": true,
00:09:38.339 "reset": true,
00:09:38.339 "nvme_admin": false,
00:09:38.339 "nvme_io": false,
00:09:38.339 "nvme_io_md": false,
00:09:38.339 "write_zeroes": true,
00:09:38.339 "zcopy": true,
00:09:38.339 "get_zone_info": false,
00:09:38.339 "zone_management": false,
00:09:38.339 "zone_append": false,
00:09:38.339 "compare": false,
00:09:38.339 "compare_and_write": false,
00:09:38.339 "abort": true,
00:09:38.339 "seek_hole": false,
00:09:38.339 "seek_data": false,
00:09:38.339 "copy": true,
00:09:38.339 "nvme_iov_md": false
00:09:38.339 },
00:09:38.339 "memory_domains": [
00:09:38.339 {
00:09:38.339 "dma_device_id": "system",
00:09:38.339 "dma_device_type": 1
00:09:38.339 },
00:09:38.339 {
00:09:38.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.339 "dma_device_type": 2
00:09:38.339 }
00:09:38.339 ],
00:09:38.339 "driver_specific": {}
00:09:38.339 }
00:09:38.339 ]
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.339 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.339 "name": "Existed_Raid",
00:09:38.339 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.339 "strip_size_kb": 64,
00:09:38.339 "state": "configuring",
00:09:38.339 "raid_level": "concat",
00:09:38.339 "superblock": false,
00:09:38.339 "num_base_bdevs": 3,
00:09:38.339 "num_base_bdevs_discovered": 2,
00:09:38.339 "num_base_bdevs_operational": 3,
00:09:38.339 "base_bdevs_list": [
00:09:38.339 {
00:09:38.339 "name": "BaseBdev1",
00:09:38.339 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:38.339 "is_configured": true, 00:09:38.340 "data_offset": 0, 00:09:38.340 "data_size": 65536 00:09:38.340 }, 00:09:38.340 { 00:09:38.340 "name": null, 00:09:38.340 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:38.340 "is_configured": false, 00:09:38.340 "data_offset": 0, 00:09:38.340 "data_size": 65536 00:09:38.340 }, 00:09:38.340 { 00:09:38.340 "name": "BaseBdev3", 00:09:38.340 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:38.340 "is_configured": true, 00:09:38.340 "data_offset": 0, 00:09:38.340 "data_size": 65536 00:09:38.340 } 00:09:38.340 ] 00:09:38.340 }' 00:09:38.340 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.340 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.600 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.600 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.600 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.600 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.600 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.861 [2024-11-29 07:41:28.550755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.861 
07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.861 "name": "Existed_Raid", 00:09:38.861 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.861 "strip_size_kb": 64, 00:09:38.861 "state": "configuring", 00:09:38.861 "raid_level": "concat", 00:09:38.861 "superblock": false, 00:09:38.861 "num_base_bdevs": 3, 00:09:38.861 "num_base_bdevs_discovered": 1, 00:09:38.861 "num_base_bdevs_operational": 3, 00:09:38.861 "base_bdevs_list": [ 00:09:38.861 { 00:09:38.861 "name": "BaseBdev1", 00:09:38.861 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:38.861 "is_configured": true, 00:09:38.861 "data_offset": 0, 00:09:38.861 "data_size": 65536 00:09:38.861 }, 00:09:38.861 { 00:09:38.861 "name": null, 00:09:38.861 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:38.861 "is_configured": false, 00:09:38.861 "data_offset": 0, 00:09:38.861 "data_size": 65536 00:09:38.861 }, 00:09:38.861 { 00:09:38.861 "name": null, 00:09:38.861 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:38.861 "is_configured": false, 00:09:38.861 "data_offset": 0, 00:09:38.861 "data_size": 65536 00:09:38.861 } 00:09:38.861 ] 00:09:38.861 }' 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.861 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.121 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.121 07:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.121 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.121 07:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.121 [2024-11-29 07:41:29.041932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.121 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.381 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.381 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.381 "name": "Existed_Raid", 00:09:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.381 "strip_size_kb": 64, 00:09:39.381 "state": "configuring", 00:09:39.381 "raid_level": "concat", 00:09:39.381 "superblock": false, 00:09:39.381 "num_base_bdevs": 3, 00:09:39.381 "num_base_bdevs_discovered": 2, 00:09:39.381 "num_base_bdevs_operational": 3, 00:09:39.381 "base_bdevs_list": [ 00:09:39.381 { 00:09:39.381 "name": "BaseBdev1", 00:09:39.381 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:39.381 "is_configured": true, 00:09:39.381 "data_offset": 0, 00:09:39.381 "data_size": 65536 00:09:39.382 }, 00:09:39.382 { 00:09:39.382 "name": null, 00:09:39.382 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:39.382 "is_configured": false, 00:09:39.382 "data_offset": 0, 00:09:39.382 "data_size": 65536 00:09:39.382 }, 00:09:39.382 { 00:09:39.382 "name": "BaseBdev3", 00:09:39.382 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:39.382 "is_configured": true, 00:09:39.382 "data_offset": 0, 00:09:39.382 "data_size": 65536 00:09:39.382 } 00:09:39.382 ] 00:09:39.382 }' 00:09:39.382 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.382 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.641 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.641 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.642 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.642 [2024-11-29 07:41:29.525183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.901 
07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.901 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.901 "name": "Existed_Raid", 00:09:39.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.901 "strip_size_kb": 64, 00:09:39.901 "state": "configuring", 00:09:39.901 "raid_level": "concat", 00:09:39.901 "superblock": false, 00:09:39.901 "num_base_bdevs": 3, 00:09:39.901 "num_base_bdevs_discovered": 1, 00:09:39.901 "num_base_bdevs_operational": 3, 00:09:39.901 "base_bdevs_list": [ 00:09:39.901 { 00:09:39.901 "name": null, 00:09:39.901 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:39.901 "is_configured": false, 00:09:39.901 "data_offset": 0, 00:09:39.901 "data_size": 65536 00:09:39.901 }, 00:09:39.901 { 00:09:39.901 "name": null, 00:09:39.901 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:39.901 "is_configured": false, 00:09:39.901 "data_offset": 0, 00:09:39.901 "data_size": 65536 00:09:39.901 }, 00:09:39.902 { 00:09:39.902 "name": "BaseBdev3", 00:09:39.902 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:39.902 "is_configured": true, 00:09:39.902 "data_offset": 0, 00:09:39.902 "data_size": 65536 00:09:39.902 } 00:09:39.902 ] 00:09:39.902 }' 00:09:39.902 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.902 07:41:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.162 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 [2024-11-29 07:41:30.108224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.422 07:41:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.422 "name": "Existed_Raid", 00:09:40.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.422 "strip_size_kb": 64, 00:09:40.422 "state": "configuring", 00:09:40.422 "raid_level": "concat", 00:09:40.422 "superblock": false, 00:09:40.422 "num_base_bdevs": 3, 00:09:40.422 "num_base_bdevs_discovered": 2, 00:09:40.422 "num_base_bdevs_operational": 3, 00:09:40.422 "base_bdevs_list": [ 00:09:40.422 { 00:09:40.422 "name": null, 00:09:40.422 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:40.422 "is_configured": false, 00:09:40.422 "data_offset": 0, 00:09:40.422 "data_size": 65536 00:09:40.422 }, 00:09:40.422 { 00:09:40.422 "name": "BaseBdev2", 00:09:40.422 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:40.422 "is_configured": true, 00:09:40.422 "data_offset": 
0, 00:09:40.422 "data_size": 65536 00:09:40.422 }, 00:09:40.422 { 00:09:40.422 "name": "BaseBdev3", 00:09:40.422 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:40.422 "is_configured": true, 00:09:40.422 "data_offset": 0, 00:09:40.422 "data_size": 65536 00:09:40.422 } 00:09:40.422 ] 00:09:40.422 }' 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.422 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e423cbaa-6f5c-495d-abe1-2d87f9a952de 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.682 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.942 [2024-11-29 07:41:30.659697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.942 [2024-11-29 07:41:30.659743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.942 [2024-11-29 07:41:30.659751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:40.942 [2024-11-29 07:41:30.659998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.942 [2024-11-29 07:41:30.660193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.942 [2024-11-29 07:41:30.660205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.942 [2024-11-29 07:41:30.660479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.942 NewBaseBdev 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.942 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.942 
07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 [ 00:09:40.943 { 00:09:40.943 "name": "NewBaseBdev", 00:09:40.943 "aliases": [ 00:09:40.943 "e423cbaa-6f5c-495d-abe1-2d87f9a952de" 00:09:40.943 ], 00:09:40.943 "product_name": "Malloc disk", 00:09:40.943 "block_size": 512, 00:09:40.943 "num_blocks": 65536, 00:09:40.943 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:40.943 "assigned_rate_limits": { 00:09:40.943 "rw_ios_per_sec": 0, 00:09:40.943 "rw_mbytes_per_sec": 0, 00:09:40.943 "r_mbytes_per_sec": 0, 00:09:40.943 "w_mbytes_per_sec": 0 00:09:40.943 }, 00:09:40.943 "claimed": true, 00:09:40.943 "claim_type": "exclusive_write", 00:09:40.943 "zoned": false, 00:09:40.943 "supported_io_types": { 00:09:40.943 "read": true, 00:09:40.943 "write": true, 00:09:40.943 "unmap": true, 00:09:40.943 "flush": true, 00:09:40.943 "reset": true, 00:09:40.943 "nvme_admin": false, 00:09:40.943 "nvme_io": false, 00:09:40.943 "nvme_io_md": false, 00:09:40.943 "write_zeroes": true, 00:09:40.943 "zcopy": true, 00:09:40.943 "get_zone_info": false, 00:09:40.943 "zone_management": false, 00:09:40.943 "zone_append": false, 00:09:40.943 "compare": false, 00:09:40.943 "compare_and_write": false, 00:09:40.943 "abort": true, 00:09:40.943 "seek_hole": false, 00:09:40.943 "seek_data": false, 00:09:40.943 "copy": true, 00:09:40.943 "nvme_iov_md": false 00:09:40.943 }, 00:09:40.943 
"memory_domains": [ 00:09:40.943 { 00:09:40.943 "dma_device_id": "system", 00:09:40.943 "dma_device_type": 1 00:09:40.943 }, 00:09:40.943 { 00:09:40.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.943 "dma_device_type": 2 00:09:40.943 } 00:09:40.943 ], 00:09:40.943 "driver_specific": {} 00:09:40.943 } 00:09:40.943 ] 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.943 "name": "Existed_Raid", 00:09:40.943 "uuid": "5c50fa30-f676-4c9d-bdcc-eefd23dabc95", 00:09:40.943 "strip_size_kb": 64, 00:09:40.943 "state": "online", 00:09:40.943 "raid_level": "concat", 00:09:40.943 "superblock": false, 00:09:40.943 "num_base_bdevs": 3, 00:09:40.943 "num_base_bdevs_discovered": 3, 00:09:40.943 "num_base_bdevs_operational": 3, 00:09:40.943 "base_bdevs_list": [ 00:09:40.943 { 00:09:40.943 "name": "NewBaseBdev", 00:09:40.943 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:40.943 "is_configured": true, 00:09:40.943 "data_offset": 0, 00:09:40.943 "data_size": 65536 00:09:40.943 }, 00:09:40.943 { 00:09:40.943 "name": "BaseBdev2", 00:09:40.943 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:40.943 "is_configured": true, 00:09:40.943 "data_offset": 0, 00:09:40.943 "data_size": 65536 00:09:40.943 }, 00:09:40.943 { 00:09:40.943 "name": "BaseBdev3", 00:09:40.943 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:40.943 "is_configured": true, 00:09:40.943 "data_offset": 0, 00:09:40.943 "data_size": 65536 00:09:40.943 } 00:09:40.943 ] 00:09:40.943 }' 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.943 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.203 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.463 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.463 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.463 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.463 [2024-11-29 07:41:31.155249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.463 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.463 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.463 "name": "Existed_Raid", 00:09:41.463 "aliases": [ 00:09:41.463 "5c50fa30-f676-4c9d-bdcc-eefd23dabc95" 00:09:41.463 ], 00:09:41.463 "product_name": "Raid Volume", 00:09:41.463 "block_size": 512, 00:09:41.463 "num_blocks": 196608, 00:09:41.463 "uuid": "5c50fa30-f676-4c9d-bdcc-eefd23dabc95", 00:09:41.463 "assigned_rate_limits": { 00:09:41.463 "rw_ios_per_sec": 0, 00:09:41.463 "rw_mbytes_per_sec": 0, 00:09:41.463 "r_mbytes_per_sec": 0, 00:09:41.463 "w_mbytes_per_sec": 0 00:09:41.463 }, 00:09:41.463 "claimed": false, 00:09:41.463 "zoned": false, 00:09:41.463 "supported_io_types": { 00:09:41.463 "read": true, 00:09:41.463 "write": true, 00:09:41.463 "unmap": true, 00:09:41.463 "flush": true, 00:09:41.463 "reset": true, 00:09:41.463 "nvme_admin": false, 00:09:41.463 "nvme_io": false, 00:09:41.463 "nvme_io_md": false, 00:09:41.463 "write_zeroes": true, 
00:09:41.463 "zcopy": false, 00:09:41.463 "get_zone_info": false, 00:09:41.463 "zone_management": false, 00:09:41.463 "zone_append": false, 00:09:41.463 "compare": false, 00:09:41.463 "compare_and_write": false, 00:09:41.463 "abort": false, 00:09:41.463 "seek_hole": false, 00:09:41.463 "seek_data": false, 00:09:41.463 "copy": false, 00:09:41.463 "nvme_iov_md": false 00:09:41.463 }, 00:09:41.463 "memory_domains": [ 00:09:41.463 { 00:09:41.463 "dma_device_id": "system", 00:09:41.463 "dma_device_type": 1 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.463 "dma_device_type": 2 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "dma_device_id": "system", 00:09:41.463 "dma_device_type": 1 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.463 "dma_device_type": 2 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "dma_device_id": "system", 00:09:41.463 "dma_device_type": 1 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.463 "dma_device_type": 2 00:09:41.463 } 00:09:41.463 ], 00:09:41.463 "driver_specific": { 00:09:41.463 "raid": { 00:09:41.463 "uuid": "5c50fa30-f676-4c9d-bdcc-eefd23dabc95", 00:09:41.463 "strip_size_kb": 64, 00:09:41.463 "state": "online", 00:09:41.463 "raid_level": "concat", 00:09:41.463 "superblock": false, 00:09:41.463 "num_base_bdevs": 3, 00:09:41.463 "num_base_bdevs_discovered": 3, 00:09:41.463 "num_base_bdevs_operational": 3, 00:09:41.463 "base_bdevs_list": [ 00:09:41.463 { 00:09:41.463 "name": "NewBaseBdev", 00:09:41.463 "uuid": "e423cbaa-6f5c-495d-abe1-2d87f9a952de", 00:09:41.463 "is_configured": true, 00:09:41.463 "data_offset": 0, 00:09:41.463 "data_size": 65536 00:09:41.463 }, 00:09:41.463 { 00:09:41.463 "name": "BaseBdev2", 00:09:41.463 "uuid": "148147de-991f-4a01-ade2-5538527640b3", 00:09:41.463 "is_configured": true, 00:09:41.463 "data_offset": 0, 00:09:41.463 "data_size": 65536 00:09:41.463 }, 00:09:41.463 { 
00:09:41.463 "name": "BaseBdev3", 00:09:41.463 "uuid": "821b6f9d-19d1-41a7-b02f-2b5fedc87b55", 00:09:41.463 "is_configured": true, 00:09:41.463 "data_offset": 0, 00:09:41.463 "data_size": 65536 00:09:41.463 } 00:09:41.463 ] 00:09:41.463 } 00:09:41.463 } 00:09:41.464 }' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.464 BaseBdev2 00:09:41.464 BaseBdev3' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.464 07:41:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:41.464 [2024-11-29 07:41:31.406513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.464 [2024-11-29 07:41:31.406587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.464 [2024-11-29 07:41:31.406696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.464 [2024-11-29 07:41:31.406753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.464 [2024-11-29 07:41:31.406766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65415 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65415 ']' 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65415 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65415 00:09:41.724 killing process with pid 65415 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65415' 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65415 00:09:41.724 [2024-11-29 07:41:31.455330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.724 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65415 00:09:41.984 [2024-11-29 07:41:31.743802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.925 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.925 00:09:42.925 real 0m10.389s 00:09:42.925 user 0m16.570s 00:09:42.925 sys 0m1.753s 00:09:42.925 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.926 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.926 ************************************ 00:09:42.926 END TEST raid_state_function_test 00:09:42.926 ************************************ 00:09:43.186 07:41:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:43.186 07:41:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.186 07:41:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.186 07:41:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.186 ************************************ 00:09:43.186 START TEST raid_state_function_test_sb 00:09:43.186 ************************************ 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.186 Process raid pid: 66031 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66031 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66031' 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66031 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66031 ']' 00:09:43.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.186 07:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.186 [2024-11-29 07:41:33.003414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:43.186 [2024-11-29 07:41:33.003545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.445 [2024-11-29 07:41:33.160080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.445 [2024-11-29 07:41:33.268432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.703 [2024-11-29 07:41:33.470251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.703 [2024-11-29 07:41:33.470289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.962 [2024-11-29 07:41:33.825001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.962 [2024-11-29 07:41:33.825058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.962 [2024-11-29 
07:41:33.825068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.962 [2024-11-29 07:41:33.825093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.962 [2024-11-29 07:41:33.825099] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.962 [2024-11-29 07:41:33.825109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.962 "name": "Existed_Raid", 00:09:43.962 "uuid": "e3a703e3-af86-4e34-90d4-7aa368603f12", 00:09:43.962 "strip_size_kb": 64, 00:09:43.962 "state": "configuring", 00:09:43.962 "raid_level": "concat", 00:09:43.962 "superblock": true, 00:09:43.962 "num_base_bdevs": 3, 00:09:43.962 "num_base_bdevs_discovered": 0, 00:09:43.962 "num_base_bdevs_operational": 3, 00:09:43.962 "base_bdevs_list": [ 00:09:43.962 { 00:09:43.962 "name": "BaseBdev1", 00:09:43.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.962 "is_configured": false, 00:09:43.962 "data_offset": 0, 00:09:43.962 "data_size": 0 00:09:43.962 }, 00:09:43.962 { 00:09:43.962 "name": "BaseBdev2", 00:09:43.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.962 "is_configured": false, 00:09:43.962 "data_offset": 0, 00:09:43.962 "data_size": 0 00:09:43.962 }, 00:09:43.962 { 00:09:43.962 "name": "BaseBdev3", 00:09:43.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.962 "is_configured": false, 00:09:43.962 "data_offset": 0, 00:09:43.962 "data_size": 0 00:09:43.962 } 00:09:43.962 ] 00:09:43.962 }' 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.962 07:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.529 [2024-11-29 07:41:34.272187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.529 [2024-11-29 07:41:34.272288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.529 [2024-11-29 07:41:34.284168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.529 [2024-11-29 07:41:34.284252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.529 [2024-11-29 07:41:34.284303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.529 [2024-11-29 07:41:34.284330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.529 [2024-11-29 07:41:34.284361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.529 [2024-11-29 07:41:34.284388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.529 
07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.529 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.530 [2024-11-29 07:41:34.330970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.530 BaseBdev1 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.530 [ 00:09:44.530 { 
00:09:44.530 "name": "BaseBdev1", 00:09:44.530 "aliases": [ 00:09:44.530 "15418670-4e00-44d9-9cc9-da907a8de87f" 00:09:44.530 ], 00:09:44.530 "product_name": "Malloc disk", 00:09:44.530 "block_size": 512, 00:09:44.530 "num_blocks": 65536, 00:09:44.530 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f", 00:09:44.530 "assigned_rate_limits": { 00:09:44.530 "rw_ios_per_sec": 0, 00:09:44.530 "rw_mbytes_per_sec": 0, 00:09:44.530 "r_mbytes_per_sec": 0, 00:09:44.530 "w_mbytes_per_sec": 0 00:09:44.530 }, 00:09:44.530 "claimed": true, 00:09:44.530 "claim_type": "exclusive_write", 00:09:44.530 "zoned": false, 00:09:44.530 "supported_io_types": { 00:09:44.530 "read": true, 00:09:44.530 "write": true, 00:09:44.530 "unmap": true, 00:09:44.530 "flush": true, 00:09:44.530 "reset": true, 00:09:44.530 "nvme_admin": false, 00:09:44.530 "nvme_io": false, 00:09:44.530 "nvme_io_md": false, 00:09:44.530 "write_zeroes": true, 00:09:44.530 "zcopy": true, 00:09:44.530 "get_zone_info": false, 00:09:44.530 "zone_management": false, 00:09:44.530 "zone_append": false, 00:09:44.530 "compare": false, 00:09:44.530 "compare_and_write": false, 00:09:44.530 "abort": true, 00:09:44.530 "seek_hole": false, 00:09:44.530 "seek_data": false, 00:09:44.530 "copy": true, 00:09:44.530 "nvme_iov_md": false 00:09:44.530 }, 00:09:44.530 "memory_domains": [ 00:09:44.530 { 00:09:44.530 "dma_device_id": "system", 00:09:44.530 "dma_device_type": 1 00:09:44.530 }, 00:09:44.530 { 00:09:44.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.530 "dma_device_type": 2 00:09:44.530 } 00:09:44.530 ], 00:09:44.530 "driver_specific": {} 00:09:44.530 } 00:09:44.530 ] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.530 "name": "Existed_Raid", 00:09:44.530 "uuid": "20bae5dd-cb52-4c70-aae5-ca5b2705d9a5", 00:09:44.530 "strip_size_kb": 64, 00:09:44.530 "state": "configuring", 00:09:44.530 "raid_level": "concat", 00:09:44.530 "superblock": true, 00:09:44.530 
"num_base_bdevs": 3, 00:09:44.530 "num_base_bdevs_discovered": 1, 00:09:44.530 "num_base_bdevs_operational": 3, 00:09:44.530 "base_bdevs_list": [ 00:09:44.530 { 00:09:44.530 "name": "BaseBdev1", 00:09:44.530 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f", 00:09:44.530 "is_configured": true, 00:09:44.530 "data_offset": 2048, 00:09:44.530 "data_size": 63488 00:09:44.530 }, 00:09:44.530 { 00:09:44.530 "name": "BaseBdev2", 00:09:44.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.530 "is_configured": false, 00:09:44.530 "data_offset": 0, 00:09:44.530 "data_size": 0 00:09:44.530 }, 00:09:44.530 { 00:09:44.530 "name": "BaseBdev3", 00:09:44.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.530 "is_configured": false, 00:09:44.530 "data_offset": 0, 00:09:44.530 "data_size": 0 00:09:44.530 } 00:09:44.530 ] 00:09:44.530 }' 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.530 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.098 [2024-11-29 07:41:34.802231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.098 [2024-11-29 07:41:34.802287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.098 
07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.098 [2024-11-29 07:41:34.814263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:45.098 [2024-11-29 07:41:34.816135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:45.098 [2024-11-29 07:41:34.816178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:45.098 [2024-11-29 07:41:34.816189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:45.098 [2024-11-29 07:41:34.816199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.098 "name": "Existed_Raid",
00:09:45.098 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:45.098 "strip_size_kb": 64,
00:09:45.098 "state": "configuring",
00:09:45.098 "raid_level": "concat",
00:09:45.098 "superblock": true,
00:09:45.098 "num_base_bdevs": 3,
00:09:45.098 "num_base_bdevs_discovered": 1,
00:09:45.098 "num_base_bdevs_operational": 3,
00:09:45.098 "base_bdevs_list": [
00:09:45.098 {
00:09:45.098 "name": "BaseBdev1",
00:09:45.098 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f",
00:09:45.098 "is_configured": true,
00:09:45.098 "data_offset": 2048,
00:09:45.098 "data_size": 63488
00:09:45.098 },
00:09:45.098 {
00:09:45.098 "name": "BaseBdev2",
00:09:45.098 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:45.098 "is_configured": false,
00:09:45.098 "data_offset": 0,
00:09:45.098 "data_size": 0
00:09:45.098 },
00:09:45.098 {
00:09:45.098 "name": "BaseBdev3",
00:09:45.098 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:45.098 "is_configured": false,
00:09:45.098 "data_offset": 0,
00:09:45.098 "data_size": 0
00:09:45.098 }
00:09:45.098 ]
00:09:45.098 }'
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.098 07:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.357 [2024-11-29 07:41:35.274072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:45.357 BaseBdev2
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.357 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.357 [
00:09:45.615 {
00:09:45.615 "name": "BaseBdev2",
00:09:45.615 "aliases": [
00:09:45.615 "4ed312dc-4399-4de2-8b7f-3c0ea36792a8"
00:09:45.615 ],
00:09:45.615 "product_name": "Malloc disk",
00:09:45.615 "block_size": 512,
00:09:45.615 "num_blocks": 65536,
00:09:45.615 "uuid": "4ed312dc-4399-4de2-8b7f-3c0ea36792a8",
00:09:45.615 "assigned_rate_limits": {
00:09:45.615 "rw_ios_per_sec": 0,
00:09:45.615 "rw_mbytes_per_sec": 0,
00:09:45.615 "r_mbytes_per_sec": 0,
00:09:45.615 "w_mbytes_per_sec": 0
00:09:45.615 },
00:09:45.615 "claimed": true,
00:09:45.615 "claim_type": "exclusive_write",
00:09:45.615 "zoned": false,
00:09:45.615 "supported_io_types": {
00:09:45.615 "read": true,
00:09:45.615 "write": true,
00:09:45.615 "unmap": true,
00:09:45.615 "flush": true,
00:09:45.615 "reset": true,
00:09:45.615 "nvme_admin": false,
00:09:45.615 "nvme_io": false,
00:09:45.615 "nvme_io_md": false,
00:09:45.615 "write_zeroes": true,
00:09:45.615 "zcopy": true,
00:09:45.615 "get_zone_info": false,
00:09:45.615 "zone_management": false,
00:09:45.615 "zone_append": false,
00:09:45.615 "compare": false,
00:09:45.615 "compare_and_write": false,
00:09:45.615 "abort": true,
00:09:45.615 "seek_hole": false,
00:09:45.615 "seek_data": false,
00:09:45.615 "copy": true,
00:09:45.615 "nvme_iov_md": false
00:09:45.615 },
00:09:45.615 "memory_domains": [
00:09:45.615 {
00:09:45.615 "dma_device_id": "system",
00:09:45.615 "dma_device_type": 1
00:09:45.615 },
00:09:45.616 {
00:09:45.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.616 "dma_device_type": 2
00:09:45.616 }
00:09:45.616 ],
00:09:45.616 "driver_specific": {}
00:09:45.616 }
00:09:45.616 ]
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.616 "name": "Existed_Raid",
00:09:45.616 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:45.616 "strip_size_kb": 64,
00:09:45.616 "state": "configuring",
00:09:45.616 "raid_level": "concat",
00:09:45.616 "superblock": true,
00:09:45.616 "num_base_bdevs": 3,
00:09:45.616 "num_base_bdevs_discovered": 2,
00:09:45.616 "num_base_bdevs_operational": 3,
00:09:45.616 "base_bdevs_list": [
00:09:45.616 {
00:09:45.616 "name": "BaseBdev1",
00:09:45.616 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f",
00:09:45.616 "is_configured": true,
00:09:45.616 "data_offset": 2048,
00:09:45.616 "data_size": 63488
00:09:45.616 },
00:09:45.616 {
00:09:45.616 "name": "BaseBdev2",
00:09:45.616 "uuid": "4ed312dc-4399-4de2-8b7f-3c0ea36792a8",
00:09:45.616 "is_configured": true,
00:09:45.616 "data_offset": 2048,
00:09:45.616 "data_size": 63488
00:09:45.616 },
00:09:45.616 {
00:09:45.616 "name": "BaseBdev3",
00:09:45.616 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:45.616 "is_configured": false,
00:09:45.616 "data_offset": 0,
00:09:45.616 "data_size": 0
00:09:45.616 }
00:09:45.616 ]
00:09:45.616 }'
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.616 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.874 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:45.874 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.874 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.875 [2024-11-29 07:41:35.773057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:45.875 [2024-11-29 07:41:35.773391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:45.875 [2024-11-29 07:41:35.773412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:45.875 [2024-11-29 07:41:35.773689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:45.875 [2024-11-29 07:41:35.773872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:45.875 [2024-11-29 07:41:35.773883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:45.875 BaseBdev3
00:09:45.875 [2024-11-29 07:41:35.774035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.875 [
00:09:45.875 {
00:09:45.875 "name": "BaseBdev3",
00:09:45.875 "aliases": [
00:09:45.875 "9a291e85-3408-457f-a7c5-6e96da546c7c"
00:09:45.875 ],
00:09:45.875 "product_name": "Malloc disk",
00:09:45.875 "block_size": 512,
00:09:45.875 "num_blocks": 65536,
00:09:45.875 "uuid": "9a291e85-3408-457f-a7c5-6e96da546c7c",
00:09:45.875 "assigned_rate_limits": {
00:09:45.875 "rw_ios_per_sec": 0,
00:09:45.875 "rw_mbytes_per_sec": 0,
00:09:45.875 "r_mbytes_per_sec": 0,
00:09:45.875 "w_mbytes_per_sec": 0
00:09:45.875 },
00:09:45.875 "claimed": true,
00:09:45.875 "claim_type": "exclusive_write",
00:09:45.875 "zoned": false,
00:09:45.875 "supported_io_types": {
00:09:45.875 "read": true,
00:09:45.875 "write": true,
00:09:45.875 "unmap": true,
00:09:45.875 "flush": true,
00:09:45.875 "reset": true,
00:09:45.875 "nvme_admin": false,
00:09:45.875 "nvme_io": false,
00:09:45.875 "nvme_io_md": false,
00:09:45.875 "write_zeroes": true,
00:09:45.875 "zcopy": true,
00:09:45.875 "get_zone_info": false,
00:09:45.875 "zone_management": false,
00:09:45.875 "zone_append": false,
00:09:45.875 "compare": false,
00:09:45.875 "compare_and_write": false,
00:09:45.875 "abort": true,
00:09:45.875 "seek_hole": false,
00:09:45.875 "seek_data": false,
00:09:45.875 "copy": true,
00:09:45.875 "nvme_iov_md": false
00:09:45.875 },
00:09:45.875 "memory_domains": [
00:09:45.875 {
00:09:45.875 "dma_device_id": "system",
00:09:45.875 "dma_device_type": 1
00:09:45.875 },
00:09:45.875 {
00:09:45.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.875 "dma_device_type": 2
00:09:45.875 }
00:09:45.875 ],
00:09:45.875 "driver_specific": {}
00:09:45.875 }
00:09:45.875 ]
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.875 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:46.134 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.134 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.134 "name": "Existed_Raid",
00:09:46.134 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:46.134 "strip_size_kb": 64,
00:09:46.134 "state": "online",
00:09:46.134 "raid_level": "concat",
00:09:46.134 "superblock": true,
00:09:46.134 "num_base_bdevs": 3,
00:09:46.134 "num_base_bdevs_discovered": 3,
00:09:46.134 "num_base_bdevs_operational": 3,
00:09:46.134 "base_bdevs_list": [
00:09:46.134 {
00:09:46.134 "name": "BaseBdev1",
00:09:46.134 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f",
00:09:46.134 "is_configured": true,
00:09:46.134 "data_offset": 2048,
00:09:46.134 "data_size": 63488
00:09:46.134 },
00:09:46.134 {
00:09:46.134 "name": "BaseBdev2",
00:09:46.134 "uuid": "4ed312dc-4399-4de2-8b7f-3c0ea36792a8",
00:09:46.134 "is_configured": true,
00:09:46.134 "data_offset": 2048,
00:09:46.134 "data_size": 63488
00:09:46.134 },
00:09:46.134 {
00:09:46.134 "name": "BaseBdev3",
00:09:46.134 "uuid": "9a291e85-3408-457f-a7c5-6e96da546c7c",
00:09:46.134 "is_configured": true,
00:09:46.134 "data_offset": 2048,
00:09:46.134 "data_size": 63488
00:09:46.134 }
00:09:46.134 ]
00:09:46.134 }'
00:09:46.134 07:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.134 07:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.393 [2024-11-29 07:41:36.288535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:46.393 "name": "Existed_Raid",
00:09:46.393 "aliases": [
00:09:46.393 "c18ef523-f02d-4b3f-ac67-a71f321f6964"
00:09:46.393 ],
00:09:46.393 "product_name": "Raid Volume",
00:09:46.393 "block_size": 512,
00:09:46.393 "num_blocks": 190464,
00:09:46.393 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:46.393 "assigned_rate_limits": {
00:09:46.393 "rw_ios_per_sec": 0,
00:09:46.393 "rw_mbytes_per_sec": 0,
00:09:46.393 "r_mbytes_per_sec": 0,
00:09:46.393 "w_mbytes_per_sec": 0
00:09:46.393 },
00:09:46.393 "claimed": false,
00:09:46.393 "zoned": false,
00:09:46.393 "supported_io_types": {
00:09:46.393 "read": true,
00:09:46.393 "write": true,
00:09:46.393 "unmap": true,
00:09:46.393 "flush": true,
00:09:46.393 "reset": true,
00:09:46.393 "nvme_admin": false,
00:09:46.393 "nvme_io": false,
00:09:46.393 "nvme_io_md": false,
00:09:46.393 "write_zeroes": true,
00:09:46.393 "zcopy": false,
00:09:46.393 "get_zone_info": false,
00:09:46.393 "zone_management": false,
00:09:46.393 "zone_append": false,
00:09:46.393 "compare": false,
00:09:46.393 "compare_and_write": false,
00:09:46.393 "abort": false,
00:09:46.393 "seek_hole": false,
00:09:46.393 "seek_data": false,
00:09:46.393 "copy": false,
00:09:46.393 "nvme_iov_md": false
00:09:46.393 },
00:09:46.393 "memory_domains": [
00:09:46.393 {
00:09:46.393 "dma_device_id": "system",
00:09:46.393 "dma_device_type": 1
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.393 "dma_device_type": 2
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "dma_device_id": "system",
00:09:46.393 "dma_device_type": 1
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.393 "dma_device_type": 2
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "dma_device_id": "system",
00:09:46.393 "dma_device_type": 1
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.393 "dma_device_type": 2
00:09:46.393 }
00:09:46.393 ],
00:09:46.393 "driver_specific": {
00:09:46.393 "raid": {
00:09:46.393 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:46.393 "strip_size_kb": 64,
00:09:46.393 "state": "online",
00:09:46.393 "raid_level": "concat",
00:09:46.393 "superblock": true,
00:09:46.393 "num_base_bdevs": 3,
00:09:46.393 "num_base_bdevs_discovered": 3,
00:09:46.393 "num_base_bdevs_operational": 3,
00:09:46.393 "base_bdevs_list": [
00:09:46.393 {
00:09:46.393 "name": "BaseBdev1",
00:09:46.393 "uuid": "15418670-4e00-44d9-9cc9-da907a8de87f",
00:09:46.393 "is_configured": true,
00:09:46.393 "data_offset": 2048,
00:09:46.393 "data_size": 63488
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "name": "BaseBdev2",
00:09:46.393 "uuid": "4ed312dc-4399-4de2-8b7f-3c0ea36792a8",
00:09:46.393 "is_configured": true,
00:09:46.393 "data_offset": 2048,
00:09:46.393 "data_size": 63488
00:09:46.393 },
00:09:46.393 {
00:09:46.393 "name": "BaseBdev3",
00:09:46.393 "uuid": "9a291e85-3408-457f-a7c5-6e96da546c7c",
00:09:46.393 "is_configured": true,
00:09:46.393 "data_offset": 2048,
00:09:46.393 "data_size": 63488
00:09:46.393 }
00:09:46.393 ]
00:09:46.393 }
00:09:46.393 }
00:09:46.393 }'
00:09:46.393 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:46.652 BaseBdev2
00:09:46.652 BaseBdev3'
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.652 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.653 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.653 [2024-11-29 07:41:36.535819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:46.653 [2024-11-29 07:41:36.535887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:46.653 [2024-11-29 07:41:36.535947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.911 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.911 "name": "Existed_Raid",
00:09:46.911 "uuid": "c18ef523-f02d-4b3f-ac67-a71f321f6964",
00:09:46.911 "strip_size_kb": 64,
00:09:46.912 "state": "offline",
00:09:46.912 "raid_level": "concat",
00:09:46.912 "superblock": true,
00:09:46.912 "num_base_bdevs": 3,
00:09:46.912 "num_base_bdevs_discovered": 2,
00:09:46.912 "num_base_bdevs_operational": 2,
00:09:46.912 "base_bdevs_list": [
00:09:46.912 {
00:09:46.912 "name": null,
00:09:46.912 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:46.912 "is_configured": false,
00:09:46.912 "data_offset": 0,
00:09:46.912 "data_size": 63488
00:09:46.912 },
00:09:46.912 {
00:09:46.912 "name": "BaseBdev2",
00:09:46.912 "uuid": "4ed312dc-4399-4de2-8b7f-3c0ea36792a8",
00:09:46.912 "is_configured": true,
00:09:46.912 "data_offset": 2048,
00:09:46.912 "data_size": 63488
00:09:46.912 },
00:09:46.912 {
00:09:46.912 "name": "BaseBdev3",
00:09:46.912 "uuid": "9a291e85-3408-457f-a7c5-6e96da546c7c",
00:09:46.912 "is_configured": true,
00:09:46.912 "data_offset": 2048,
00:09:46.912 "data_size": 63488
00:09:46.912 }
00:09:46.912 ]
00:09:46.912 }'
00:09:46.912 07:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.912 07:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.170 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.170 [2024-11-29 07:41:37.100657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.429 [2024-11-29 07:41:37.252418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:47.429 [2024-11-29 07:41:37.252518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.429 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.689 BaseBdev2
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.689 [
00:09:47.689 {
00:09:47.689 "name": "BaseBdev2",
00:09:47.689 "aliases": [
00:09:47.689 "1302421d-5fc9-4d89-92fb-46d035555327"
00:09:47.689 ],
00:09:47.689 "product_name": "Malloc disk",
00:09:47.689 "block_size": 512,
00:09:47.689 "num_blocks": 65536,
00:09:47.689 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327",
00:09:47.689 "assigned_rate_limits": {
00:09:47.689 "rw_ios_per_sec": 0,
00:09:47.689 "rw_mbytes_per_sec": 0,
00:09:47.689 "r_mbytes_per_sec": 0,
00:09:47.689 "w_mbytes_per_sec": 0
00:09:47.689 },
00:09:47.689 "claimed": false,
00:09:47.689 "zoned": false,
00:09:47.689 "supported_io_types": {
00:09:47.689 "read": true,
00:09:47.689 "write": true,
00:09:47.689 "unmap": true,
00:09:47.689 "flush": true,
00:09:47.689 "reset": true,
00:09:47.689 "nvme_admin": false,
00:09:47.689 "nvme_io": false,
00:09:47.689 "nvme_io_md": false,
00:09:47.689 "write_zeroes": true,
00:09:47.689 "zcopy": true,
00:09:47.689 "get_zone_info": false,
00:09:47.689 "zone_management": false,
00:09:47.689 "zone_append": false,
00:09:47.689 "compare": false,
00:09:47.689 "compare_and_write": false,
00:09:47.689 "abort": true,
00:09:47.689 "seek_hole": false,
00:09:47.689 "seek_data": false,
00:09:47.689 "copy": true,
00:09:47.689 "nvme_iov_md": false
00:09:47.689 },
00:09:47.689 "memory_domains": [
00:09:47.689 {
00:09:47.689 "dma_device_id": "system",
00:09:47.689 "dma_device_type": 1
00:09:47.689 },
00:09:47.689 {
00:09:47.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:47.689 "dma_device_type": 2
00:09:47.689 }
00:09:47.689 ],
00:09:47.689 "driver_specific": {}
00:09:47.689 }
00:09:47.689 ]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.689 BaseBdev3
00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.689 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.690 [ 00:09:47.690 { 00:09:47.690 "name": "BaseBdev3", 00:09:47.690 "aliases": [ 00:09:47.690 "6dc124ef-fe27-425b-9dd8-db0ceb4603de" 00:09:47.690 ], 00:09:47.690 "product_name": "Malloc disk", 00:09:47.690 "block_size": 512, 00:09:47.690 "num_blocks": 65536, 00:09:47.690 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:47.690 "assigned_rate_limits": { 00:09:47.690 "rw_ios_per_sec": 0, 00:09:47.690 "rw_mbytes_per_sec": 0, 
00:09:47.690 "r_mbytes_per_sec": 0, 00:09:47.690 "w_mbytes_per_sec": 0 00:09:47.690 }, 00:09:47.690 "claimed": false, 00:09:47.690 "zoned": false, 00:09:47.690 "supported_io_types": { 00:09:47.690 "read": true, 00:09:47.690 "write": true, 00:09:47.690 "unmap": true, 00:09:47.690 "flush": true, 00:09:47.690 "reset": true, 00:09:47.690 "nvme_admin": false, 00:09:47.690 "nvme_io": false, 00:09:47.690 "nvme_io_md": false, 00:09:47.690 "write_zeroes": true, 00:09:47.690 "zcopy": true, 00:09:47.690 "get_zone_info": false, 00:09:47.690 "zone_management": false, 00:09:47.690 "zone_append": false, 00:09:47.690 "compare": false, 00:09:47.690 "compare_and_write": false, 00:09:47.690 "abort": true, 00:09:47.690 "seek_hole": false, 00:09:47.690 "seek_data": false, 00:09:47.690 "copy": true, 00:09:47.690 "nvme_iov_md": false 00:09:47.690 }, 00:09:47.690 "memory_domains": [ 00:09:47.690 { 00:09:47.690 "dma_device_id": "system", 00:09:47.690 "dma_device_type": 1 00:09:47.690 }, 00:09:47.690 { 00:09:47.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.690 "dma_device_type": 2 00:09:47.690 } 00:09:47.690 ], 00:09:47.690 "driver_specific": {} 00:09:47.690 } 00:09:47.690 ] 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.690 [2024-11-29 07:41:37.561891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.690 [2024-11-29 07:41:37.561988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.690 [2024-11-29 07:41:37.562034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.690 [2024-11-29 07:41:37.563823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.690 07:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.690 "name": "Existed_Raid", 00:09:47.690 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:47.690 "strip_size_kb": 64, 00:09:47.690 "state": "configuring", 00:09:47.690 "raid_level": "concat", 00:09:47.690 "superblock": true, 00:09:47.690 "num_base_bdevs": 3, 00:09:47.690 "num_base_bdevs_discovered": 2, 00:09:47.690 "num_base_bdevs_operational": 3, 00:09:47.690 "base_bdevs_list": [ 00:09:47.690 { 00:09:47.690 "name": "BaseBdev1", 00:09:47.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.690 "is_configured": false, 00:09:47.690 "data_offset": 0, 00:09:47.690 "data_size": 0 00:09:47.690 }, 00:09:47.690 { 00:09:47.690 "name": "BaseBdev2", 00:09:47.690 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:47.690 "is_configured": true, 00:09:47.690 "data_offset": 2048, 00:09:47.690 "data_size": 63488 00:09:47.690 }, 00:09:47.690 { 00:09:47.690 "name": "BaseBdev3", 00:09:47.690 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:47.690 "is_configured": true, 00:09:47.690 "data_offset": 2048, 00:09:47.690 "data_size": 63488 00:09:47.690 } 00:09:47.690 ] 00:09:47.690 }' 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.690 07:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.258 [2024-11-29 07:41:38.033109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.258 07:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.258 "name": "Existed_Raid", 00:09:48.258 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:48.258 "strip_size_kb": 64, 00:09:48.258 "state": "configuring", 00:09:48.258 "raid_level": "concat", 00:09:48.258 "superblock": true, 00:09:48.258 "num_base_bdevs": 3, 00:09:48.258 "num_base_bdevs_discovered": 1, 00:09:48.258 "num_base_bdevs_operational": 3, 00:09:48.258 "base_bdevs_list": [ 00:09:48.258 { 00:09:48.258 "name": "BaseBdev1", 00:09:48.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.258 "is_configured": false, 00:09:48.258 "data_offset": 0, 00:09:48.258 "data_size": 0 00:09:48.258 }, 00:09:48.258 { 00:09:48.258 "name": null, 00:09:48.258 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:48.258 "is_configured": false, 00:09:48.258 "data_offset": 0, 00:09:48.258 "data_size": 63488 00:09:48.258 }, 00:09:48.258 { 00:09:48.258 "name": "BaseBdev3", 00:09:48.258 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:48.258 "is_configured": true, 00:09:48.258 "data_offset": 2048, 00:09:48.258 "data_size": 63488 00:09:48.258 } 00:09:48.258 ] 00:09:48.258 }' 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.258 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.827 [2024-11-29 07:41:38.555896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.827 BaseBdev1 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.827 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.827 [ 00:09:48.827 { 00:09:48.827 "name": "BaseBdev1", 00:09:48.827 "aliases": [ 00:09:48.827 "42412c9e-4822-45aa-bd5f-abe1bece619e" 00:09:48.827 ], 00:09:48.827 "product_name": "Malloc disk", 00:09:48.827 "block_size": 512, 00:09:48.827 "num_blocks": 65536, 00:09:48.827 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:48.827 "assigned_rate_limits": { 00:09:48.827 "rw_ios_per_sec": 0, 00:09:48.827 "rw_mbytes_per_sec": 0, 00:09:48.827 "r_mbytes_per_sec": 0, 00:09:48.827 "w_mbytes_per_sec": 0 00:09:48.827 }, 00:09:48.827 "claimed": true, 00:09:48.827 "claim_type": "exclusive_write", 00:09:48.827 "zoned": false, 00:09:48.827 "supported_io_types": { 00:09:48.827 "read": true, 00:09:48.828 "write": true, 00:09:48.828 "unmap": true, 00:09:48.828 "flush": true, 00:09:48.828 "reset": true, 00:09:48.828 "nvme_admin": false, 00:09:48.828 "nvme_io": false, 00:09:48.828 "nvme_io_md": false, 00:09:48.828 "write_zeroes": true, 00:09:48.828 "zcopy": true, 00:09:48.828 "get_zone_info": false, 00:09:48.828 "zone_management": false, 00:09:48.828 "zone_append": false, 00:09:48.828 "compare": false, 00:09:48.828 "compare_and_write": false, 00:09:48.828 "abort": true, 00:09:48.828 "seek_hole": false, 00:09:48.828 "seek_data": false, 00:09:48.828 "copy": true, 00:09:48.828 "nvme_iov_md": false 00:09:48.828 }, 00:09:48.828 "memory_domains": [ 00:09:48.828 { 00:09:48.828 "dma_device_id": "system", 00:09:48.828 "dma_device_type": 1 00:09:48.828 }, 00:09:48.828 { 00:09:48.828 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:48.828 "dma_device_type": 2 00:09:48.828 } 00:09:48.828 ], 00:09:48.828 "driver_specific": {} 00:09:48.828 } 00:09:48.828 ] 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.828 "name": "Existed_Raid", 00:09:48.828 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:48.828 "strip_size_kb": 64, 00:09:48.828 "state": "configuring", 00:09:48.828 "raid_level": "concat", 00:09:48.828 "superblock": true, 00:09:48.828 "num_base_bdevs": 3, 00:09:48.828 "num_base_bdevs_discovered": 2, 00:09:48.828 "num_base_bdevs_operational": 3, 00:09:48.828 "base_bdevs_list": [ 00:09:48.828 { 00:09:48.828 "name": "BaseBdev1", 00:09:48.828 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:48.828 "is_configured": true, 00:09:48.828 "data_offset": 2048, 00:09:48.828 "data_size": 63488 00:09:48.828 }, 00:09:48.828 { 00:09:48.828 "name": null, 00:09:48.828 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:48.828 "is_configured": false, 00:09:48.828 "data_offset": 0, 00:09:48.828 "data_size": 63488 00:09:48.828 }, 00:09:48.828 { 00:09:48.828 "name": "BaseBdev3", 00:09:48.828 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:48.828 "is_configured": true, 00:09:48.828 "data_offset": 2048, 00:09:48.828 "data_size": 63488 00:09:48.828 } 00:09:48.828 ] 00:09:48.828 }' 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.828 07:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.088 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.088 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:49.088 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.088 07:41:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:49.088 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.347 [2024-11-29 07:41:39.055060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.347 "name": "Existed_Raid", 00:09:49.347 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:49.347 "strip_size_kb": 64, 00:09:49.347 "state": "configuring", 00:09:49.347 "raid_level": "concat", 00:09:49.347 "superblock": true, 00:09:49.347 "num_base_bdevs": 3, 00:09:49.347 "num_base_bdevs_discovered": 1, 00:09:49.347 "num_base_bdevs_operational": 3, 00:09:49.347 "base_bdevs_list": [ 00:09:49.347 { 00:09:49.347 "name": "BaseBdev1", 00:09:49.347 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:49.347 "is_configured": true, 00:09:49.347 "data_offset": 2048, 00:09:49.347 "data_size": 63488 00:09:49.347 }, 00:09:49.347 { 00:09:49.347 "name": null, 00:09:49.347 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:49.347 "is_configured": false, 00:09:49.347 "data_offset": 0, 00:09:49.347 "data_size": 63488 00:09:49.347 }, 00:09:49.347 { 00:09:49.347 "name": null, 00:09:49.347 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:49.347 "is_configured": false, 00:09:49.347 "data_offset": 0, 00:09:49.347 "data_size": 63488 00:09:49.347 } 00:09:49.347 ] 00:09:49.347 }' 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.347 07:41:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.606 [2024-11-29 07:41:39.518306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.606 07:41:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.606 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.865 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.865 "name": "Existed_Raid", 00:09:49.865 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:49.865 "strip_size_kb": 64, 00:09:49.865 "state": "configuring", 00:09:49.865 "raid_level": "concat", 00:09:49.865 "superblock": true, 00:09:49.865 "num_base_bdevs": 3, 00:09:49.865 "num_base_bdevs_discovered": 2, 00:09:49.865 "num_base_bdevs_operational": 3, 00:09:49.865 "base_bdevs_list": [ 00:09:49.865 { 00:09:49.865 "name": "BaseBdev1", 00:09:49.865 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:49.865 "is_configured": true, 00:09:49.865 "data_offset": 2048, 00:09:49.865 "data_size": 63488 00:09:49.865 }, 00:09:49.865 { 00:09:49.865 "name": null, 00:09:49.865 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:49.865 "is_configured": 
false, 00:09:49.865 "data_offset": 0, 00:09:49.865 "data_size": 63488 00:09:49.865 }, 00:09:49.865 { 00:09:49.865 "name": "BaseBdev3", 00:09:49.865 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:49.865 "is_configured": true, 00:09:49.865 "data_offset": 2048, 00:09:49.865 "data_size": 63488 00:09:49.865 } 00:09:49.865 ] 00:09:49.865 }' 00:09:49.865 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.865 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.124 07:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.124 [2024-11-29 07:41:39.949600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.124 07:41:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.124 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.382 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.382 "name": "Existed_Raid", 00:09:50.382 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:50.382 "strip_size_kb": 64, 00:09:50.382 "state": "configuring", 00:09:50.382 "raid_level": "concat", 00:09:50.382 "superblock": true, 00:09:50.382 "num_base_bdevs": 3, 00:09:50.382 
"num_base_bdevs_discovered": 1, 00:09:50.382 "num_base_bdevs_operational": 3, 00:09:50.382 "base_bdevs_list": [ 00:09:50.382 { 00:09:50.382 "name": null, 00:09:50.382 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:50.382 "is_configured": false, 00:09:50.382 "data_offset": 0, 00:09:50.382 "data_size": 63488 00:09:50.382 }, 00:09:50.382 { 00:09:50.382 "name": null, 00:09:50.382 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:50.382 "is_configured": false, 00:09:50.382 "data_offset": 0, 00:09:50.382 "data_size": 63488 00:09:50.382 }, 00:09:50.382 { 00:09:50.382 "name": "BaseBdev3", 00:09:50.382 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:50.382 "is_configured": true, 00:09:50.382 "data_offset": 2048, 00:09:50.382 "data_size": 63488 00:09:50.382 } 00:09:50.382 ] 00:09:50.382 }' 00:09:50.382 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.382 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.641 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.641 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.641 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.641 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.642 07:41:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.642 [2024-11-29 07:41:40.538972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.642 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.642 
07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.900 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.900 "name": "Existed_Raid", 00:09:50.900 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:50.900 "strip_size_kb": 64, 00:09:50.900 "state": "configuring", 00:09:50.900 "raid_level": "concat", 00:09:50.900 "superblock": true, 00:09:50.900 "num_base_bdevs": 3, 00:09:50.900 "num_base_bdevs_discovered": 2, 00:09:50.900 "num_base_bdevs_operational": 3, 00:09:50.900 "base_bdevs_list": [ 00:09:50.900 { 00:09:50.900 "name": null, 00:09:50.900 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:50.900 "is_configured": false, 00:09:50.900 "data_offset": 0, 00:09:50.900 "data_size": 63488 00:09:50.900 }, 00:09:50.900 { 00:09:50.900 "name": "BaseBdev2", 00:09:50.900 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:50.900 "is_configured": true, 00:09:50.900 "data_offset": 2048, 00:09:50.900 "data_size": 63488 00:09:50.900 }, 00:09:50.900 { 00:09:50.900 "name": "BaseBdev3", 00:09:50.900 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:50.900 "is_configured": true, 00:09:50.900 "data_offset": 2048, 00:09:50.900 "data_size": 63488 00:09:50.901 } 00:09:50.901 ] 00:09:50.901 }' 00:09:50.901 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.901 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.159 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:51.160 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42412c9e-4822-45aa-bd5f-abe1bece619e 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 [2024-11-29 07:41:41.081437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:51.160 [2024-11-29 07:41:41.081642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.160 [2024-11-29 07:41:41.081659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:51.160 [2024-11-29 07:41:41.081903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.160 [2024-11-29 07:41:41.082055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.160 [2024-11-29 07:41:41.082064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:51.160 
NewBaseBdev 00:09:51.160 [2024-11-29 07:41:41.082205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.420 [ 00:09:51.420 { 00:09:51.420 "name": "NewBaseBdev", 00:09:51.420 "aliases": [ 00:09:51.420 "42412c9e-4822-45aa-bd5f-abe1bece619e" 00:09:51.420 ], 00:09:51.420 "product_name": "Malloc disk", 00:09:51.420 "block_size": 512, 
00:09:51.420 "num_blocks": 65536, 00:09:51.420 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:51.420 "assigned_rate_limits": { 00:09:51.420 "rw_ios_per_sec": 0, 00:09:51.420 "rw_mbytes_per_sec": 0, 00:09:51.420 "r_mbytes_per_sec": 0, 00:09:51.420 "w_mbytes_per_sec": 0 00:09:51.420 }, 00:09:51.420 "claimed": true, 00:09:51.420 "claim_type": "exclusive_write", 00:09:51.420 "zoned": false, 00:09:51.420 "supported_io_types": { 00:09:51.420 "read": true, 00:09:51.420 "write": true, 00:09:51.420 "unmap": true, 00:09:51.420 "flush": true, 00:09:51.420 "reset": true, 00:09:51.420 "nvme_admin": false, 00:09:51.420 "nvme_io": false, 00:09:51.420 "nvme_io_md": false, 00:09:51.420 "write_zeroes": true, 00:09:51.420 "zcopy": true, 00:09:51.420 "get_zone_info": false, 00:09:51.420 "zone_management": false, 00:09:51.420 "zone_append": false, 00:09:51.420 "compare": false, 00:09:51.420 "compare_and_write": false, 00:09:51.420 "abort": true, 00:09:51.420 "seek_hole": false, 00:09:51.420 "seek_data": false, 00:09:51.420 "copy": true, 00:09:51.420 "nvme_iov_md": false 00:09:51.420 }, 00:09:51.420 "memory_domains": [ 00:09:51.420 { 00:09:51.420 "dma_device_id": "system", 00:09:51.420 "dma_device_type": 1 00:09:51.420 }, 00:09:51.420 { 00:09:51.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.420 "dma_device_type": 2 00:09:51.420 } 00:09:51.420 ], 00:09:51.420 "driver_specific": {} 00:09:51.420 } 00:09:51.420 ] 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.420 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.420 "name": "Existed_Raid", 00:09:51.420 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:51.420 "strip_size_kb": 64, 00:09:51.420 "state": "online", 00:09:51.420 "raid_level": "concat", 00:09:51.420 "superblock": true, 00:09:51.420 "num_base_bdevs": 3, 00:09:51.420 "num_base_bdevs_discovered": 3, 00:09:51.420 "num_base_bdevs_operational": 3, 00:09:51.420 "base_bdevs_list": [ 00:09:51.420 { 00:09:51.420 "name": "NewBaseBdev", 00:09:51.420 "uuid": 
"42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:51.420 "is_configured": true, 00:09:51.420 "data_offset": 2048, 00:09:51.420 "data_size": 63488 00:09:51.420 }, 00:09:51.420 { 00:09:51.420 "name": "BaseBdev2", 00:09:51.420 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:51.420 "is_configured": true, 00:09:51.420 "data_offset": 2048, 00:09:51.420 "data_size": 63488 00:09:51.420 }, 00:09:51.420 { 00:09:51.420 "name": "BaseBdev3", 00:09:51.420 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:51.420 "is_configured": true, 00:09:51.420 "data_offset": 2048, 00:09:51.420 "data_size": 63488 00:09:51.420 } 00:09:51.420 ] 00:09:51.420 }' 00:09:51.421 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.421 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:51.680 [2024-11-29 07:41:41.572999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.680 "name": "Existed_Raid", 00:09:51.680 "aliases": [ 00:09:51.680 "0b990f21-2956-40cd-8005-f225106bfe5d" 00:09:51.680 ], 00:09:51.680 "product_name": "Raid Volume", 00:09:51.680 "block_size": 512, 00:09:51.680 "num_blocks": 190464, 00:09:51.680 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:51.680 "assigned_rate_limits": { 00:09:51.680 "rw_ios_per_sec": 0, 00:09:51.680 "rw_mbytes_per_sec": 0, 00:09:51.680 "r_mbytes_per_sec": 0, 00:09:51.680 "w_mbytes_per_sec": 0 00:09:51.680 }, 00:09:51.680 "claimed": false, 00:09:51.680 "zoned": false, 00:09:51.680 "supported_io_types": { 00:09:51.680 "read": true, 00:09:51.680 "write": true, 00:09:51.680 "unmap": true, 00:09:51.680 "flush": true, 00:09:51.680 "reset": true, 00:09:51.680 "nvme_admin": false, 00:09:51.680 "nvme_io": false, 00:09:51.680 "nvme_io_md": false, 00:09:51.680 "write_zeroes": true, 00:09:51.680 "zcopy": false, 00:09:51.680 "get_zone_info": false, 00:09:51.680 "zone_management": false, 00:09:51.680 "zone_append": false, 00:09:51.680 "compare": false, 00:09:51.680 "compare_and_write": false, 00:09:51.680 "abort": false, 00:09:51.680 "seek_hole": false, 00:09:51.680 "seek_data": false, 00:09:51.680 "copy": false, 00:09:51.680 "nvme_iov_md": false 00:09:51.680 }, 00:09:51.680 "memory_domains": [ 00:09:51.680 { 00:09:51.680 "dma_device_id": "system", 00:09:51.680 "dma_device_type": 1 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.680 "dma_device_type": 2 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "dma_device_id": "system", 00:09:51.680 "dma_device_type": 1 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.680 "dma_device_type": 2 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "dma_device_id": "system", 00:09:51.680 "dma_device_type": 1 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.680 "dma_device_type": 2 00:09:51.680 } 00:09:51.680 ], 00:09:51.680 "driver_specific": { 00:09:51.680 "raid": { 00:09:51.680 "uuid": "0b990f21-2956-40cd-8005-f225106bfe5d", 00:09:51.680 "strip_size_kb": 64, 00:09:51.680 "state": "online", 00:09:51.680 "raid_level": "concat", 00:09:51.680 "superblock": true, 00:09:51.680 "num_base_bdevs": 3, 00:09:51.680 "num_base_bdevs_discovered": 3, 00:09:51.680 "num_base_bdevs_operational": 3, 00:09:51.680 "base_bdevs_list": [ 00:09:51.680 { 00:09:51.680 "name": "NewBaseBdev", 00:09:51.680 "uuid": "42412c9e-4822-45aa-bd5f-abe1bece619e", 00:09:51.680 "is_configured": true, 00:09:51.680 "data_offset": 2048, 00:09:51.680 "data_size": 63488 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "name": "BaseBdev2", 00:09:51.680 "uuid": "1302421d-5fc9-4d89-92fb-46d035555327", 00:09:51.680 "is_configured": true, 00:09:51.680 "data_offset": 2048, 00:09:51.680 "data_size": 63488 00:09:51.680 }, 00:09:51.680 { 00:09:51.680 "name": "BaseBdev3", 00:09:51.680 "uuid": "6dc124ef-fe27-425b-9dd8-db0ceb4603de", 00:09:51.680 "is_configured": true, 00:09:51.680 "data_offset": 2048, 00:09:51.680 "data_size": 63488 00:09:51.680 } 00:09:51.680 ] 00:09:51.680 } 00:09:51.680 } 00:09:51.680 }' 00:09:51.680 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:51.940 BaseBdev2 00:09:51.940 BaseBdev3' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.940 [2024-11-29 07:41:41.872151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.940 [2024-11-29 07:41:41.872220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.940 [2024-11-29 07:41:41.872319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.940 [2024-11-29 07:41:41.872377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.940 [2024-11-29 07:41:41.872390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66031 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66031 ']' 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66031 00:09:51.940 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66031 00:09:52.199 killing process with pid 66031 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66031' 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66031 00:09:52.199 [2024-11-29 07:41:41.918996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.199 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66031 00:09:52.460 [2024-11-29 07:41:42.206679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.397 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.397 00:09:53.397 real 0m10.398s 00:09:53.397 user 0m16.593s 00:09:53.397 sys 0m1.801s 00:09:53.397 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:53.397 ************************************ 00:09:53.397 END TEST raid_state_function_test_sb 00:09:53.397 ************************************ 00:09:53.397 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 07:41:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:53.656 07:41:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.656 07:41:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.656 07:41:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 ************************************ 00:09:53.656 START TEST raid_superblock_test 00:09:53.656 ************************************ 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.656 07:41:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66652 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66652 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66652 ']' 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.656 07:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 [2024-11-29 07:41:43.456000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:53.656 [2024-11-29 07:41:43.456560] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66652 ] 00:09:53.916 [2024-11-29 07:41:43.631512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.916 [2024-11-29 07:41:43.741217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.174 [2024-11-29 07:41:43.940426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.174 [2024-11-29 07:41:43.940574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.433 
07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.433 malloc1 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.433 [2024-11-29 07:41:44.330696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.433 [2024-11-29 07:41:44.330812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.433 [2024-11-29 07:41:44.330852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.433 [2024-11-29 07:41:44.330880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.433 [2024-11-29 07:41:44.332982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.433 [2024-11-29 07:41:44.333055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.433 pt1 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.433 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 malloc2 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 [2024-11-29 07:41:44.389899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.692 [2024-11-29 07:41:44.389956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.692 [2024-11-29 07:41:44.389981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.692 [2024-11-29 07:41:44.389989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.692 [2024-11-29 07:41:44.392038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.692 [2024-11-29 07:41:44.392130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.692 
pt2 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 malloc3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 [2024-11-29 07:41:44.461558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.692 [2024-11-29 07:41:44.461614] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.692 [2024-11-29 07:41:44.461635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.692 [2024-11-29 07:41:44.461644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.692 [2024-11-29 07:41:44.463665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.692 [2024-11-29 07:41:44.463755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.692 pt3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 [2024-11-29 07:41:44.473590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.692 [2024-11-29 07:41:44.475367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.692 [2024-11-29 07:41:44.475468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.692 [2024-11-29 07:41:44.475664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:54.692 [2024-11-29 07:41:44.475712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.692 [2024-11-29 07:41:44.475967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:54.692 [2024-11-29 07:41:44.476167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:54.692 [2024-11-29 07:41:44.476209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:54.692 [2024-11-29 07:41:44.476408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.692 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.693 07:41:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.693 "name": "raid_bdev1", 00:09:54.693 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:54.693 "strip_size_kb": 64, 00:09:54.693 "state": "online", 00:09:54.693 "raid_level": "concat", 00:09:54.693 "superblock": true, 00:09:54.693 "num_base_bdevs": 3, 00:09:54.693 "num_base_bdevs_discovered": 3, 00:09:54.693 "num_base_bdevs_operational": 3, 00:09:54.693 "base_bdevs_list": [ 00:09:54.693 { 00:09:54.693 "name": "pt1", 00:09:54.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.693 "is_configured": true, 00:09:54.693 "data_offset": 2048, 00:09:54.693 "data_size": 63488 00:09:54.693 }, 00:09:54.693 { 00:09:54.693 "name": "pt2", 00:09:54.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.693 "is_configured": true, 00:09:54.693 "data_offset": 2048, 00:09:54.693 "data_size": 63488 00:09:54.693 }, 00:09:54.693 { 00:09:54.693 "name": "pt3", 00:09:54.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.693 "is_configured": true, 00:09:54.693 "data_offset": 2048, 00:09:54.693 "data_size": 63488 00:09:54.693 } 00:09:54.693 ] 00:09:54.693 }' 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.693 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.261 [2024-11-29 07:41:44.933115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.261 "name": "raid_bdev1", 00:09:55.261 "aliases": [ 00:09:55.261 "def5216f-4f21-4fcb-a34a-2d4f662d7e83" 00:09:55.261 ], 00:09:55.261 "product_name": "Raid Volume", 00:09:55.261 "block_size": 512, 00:09:55.261 "num_blocks": 190464, 00:09:55.261 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:55.261 "assigned_rate_limits": { 00:09:55.261 "rw_ios_per_sec": 0, 00:09:55.261 "rw_mbytes_per_sec": 0, 00:09:55.261 "r_mbytes_per_sec": 0, 00:09:55.261 "w_mbytes_per_sec": 0 00:09:55.261 }, 00:09:55.261 "claimed": false, 00:09:55.261 "zoned": false, 00:09:55.261 "supported_io_types": { 00:09:55.261 "read": true, 00:09:55.261 "write": true, 00:09:55.261 "unmap": true, 00:09:55.261 "flush": true, 00:09:55.261 "reset": true, 00:09:55.261 "nvme_admin": false, 00:09:55.261 "nvme_io": false, 00:09:55.261 "nvme_io_md": false, 00:09:55.261 "write_zeroes": true, 00:09:55.261 "zcopy": false, 00:09:55.261 "get_zone_info": false, 00:09:55.261 "zone_management": false, 00:09:55.261 "zone_append": false, 00:09:55.261 "compare": 
false, 00:09:55.261 "compare_and_write": false, 00:09:55.261 "abort": false, 00:09:55.261 "seek_hole": false, 00:09:55.261 "seek_data": false, 00:09:55.261 "copy": false, 00:09:55.261 "nvme_iov_md": false 00:09:55.261 }, 00:09:55.261 "memory_domains": [ 00:09:55.261 { 00:09:55.261 "dma_device_id": "system", 00:09:55.261 "dma_device_type": 1 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.261 "dma_device_type": 2 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "dma_device_id": "system", 00:09:55.261 "dma_device_type": 1 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.261 "dma_device_type": 2 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "dma_device_id": "system", 00:09:55.261 "dma_device_type": 1 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.261 "dma_device_type": 2 00:09:55.261 } 00:09:55.261 ], 00:09:55.261 "driver_specific": { 00:09:55.261 "raid": { 00:09:55.261 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:55.261 "strip_size_kb": 64, 00:09:55.261 "state": "online", 00:09:55.261 "raid_level": "concat", 00:09:55.261 "superblock": true, 00:09:55.261 "num_base_bdevs": 3, 00:09:55.261 "num_base_bdevs_discovered": 3, 00:09:55.261 "num_base_bdevs_operational": 3, 00:09:55.261 "base_bdevs_list": [ 00:09:55.261 { 00:09:55.261 "name": "pt1", 00:09:55.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.261 "is_configured": true, 00:09:55.261 "data_offset": 2048, 00:09:55.261 "data_size": 63488 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "name": "pt2", 00:09:55.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.261 "is_configured": true, 00:09:55.261 "data_offset": 2048, 00:09:55.261 "data_size": 63488 00:09:55.261 }, 00:09:55.261 { 00:09:55.261 "name": "pt3", 00:09:55.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.261 "is_configured": true, 00:09:55.261 "data_offset": 2048, 00:09:55.261 
"data_size": 63488 00:09:55.261 } 00:09:55.261 ] 00:09:55.261 } 00:09:55.261 } 00:09:55.261 }' 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.261 pt2 00:09:55.261 pt3' 00:09:55.261 07:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.261 07:41:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.261 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 [2024-11-29 07:41:45.184578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.520 07:41:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=def5216f-4f21-4fcb-a34a-2d4f662d7e83 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z def5216f-4f21-4fcb-a34a-2d4f662d7e83 ']' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 [2024-11-29 07:41:45.224248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.520 [2024-11-29 07:41:45.224311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.520 [2024-11-29 07:41:45.224410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.520 [2024-11-29 07:41:45.224499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.520 [2024-11-29 07:41:45.224543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.520 [2024-11-29 07:41:45.372073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.520 [2024-11-29 07:41:45.373894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:09:55.520 [2024-11-29 07:41:45.373942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.520 [2024-11-29 07:41:45.373992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.520 [2024-11-29 07:41:45.374045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.520 [2024-11-29 07:41:45.374064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:55.520 [2024-11-29 07:41:45.374081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.520 [2024-11-29 07:41:45.374090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:55.520 request: 00:09:55.520 { 00:09:55.520 "name": "raid_bdev1", 00:09:55.520 "raid_level": "concat", 00:09:55.520 "base_bdevs": [ 00:09:55.520 "malloc1", 00:09:55.520 "malloc2", 00:09:55.520 "malloc3" 00:09:55.520 ], 00:09:55.520 "strip_size_kb": 64, 00:09:55.520 "superblock": false, 00:09:55.520 "method": "bdev_raid_create", 00:09:55.520 "req_id": 1 00:09:55.520 } 00:09:55.520 Got JSON-RPC error response 00:09:55.520 response: 00:09:55.520 { 00:09:55.520 "code": -17, 00:09:55.520 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.520 } 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:55.520 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.521 [2024-11-29 07:41:45.435936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.521 [2024-11-29 07:41:45.436081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.521 [2024-11-29 07:41:45.436134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:55.521 [2024-11-29 07:41:45.436166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.521 [2024-11-29 07:41:45.438495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.521 [2024-11-29 07:41:45.438565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.521 [2024-11-29 07:41:45.438696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:55.521 [2024-11-29 07:41:45.438778] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:55.521 pt1 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.521 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.779 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.779 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.779 "name": "raid_bdev1", 
00:09:55.779 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:55.779 "strip_size_kb": 64, 00:09:55.779 "state": "configuring", 00:09:55.779 "raid_level": "concat", 00:09:55.779 "superblock": true, 00:09:55.779 "num_base_bdevs": 3, 00:09:55.779 "num_base_bdevs_discovered": 1, 00:09:55.779 "num_base_bdevs_operational": 3, 00:09:55.779 "base_bdevs_list": [ 00:09:55.779 { 00:09:55.779 "name": "pt1", 00:09:55.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.779 "is_configured": true, 00:09:55.779 "data_offset": 2048, 00:09:55.779 "data_size": 63488 00:09:55.779 }, 00:09:55.779 { 00:09:55.779 "name": null, 00:09:55.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.779 "is_configured": false, 00:09:55.779 "data_offset": 2048, 00:09:55.779 "data_size": 63488 00:09:55.779 }, 00:09:55.779 { 00:09:55.779 "name": null, 00:09:55.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.779 "is_configured": false, 00:09:55.779 "data_offset": 2048, 00:09:55.779 "data_size": 63488 00:09:55.779 } 00:09:55.779 ] 00:09:55.779 }' 00:09:55.779 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.779 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.038 [2024-11-29 07:41:45.887220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.038 [2024-11-29 07:41:45.887346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.038 [2024-11-29 07:41:45.887393] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:56.038 [2024-11-29 07:41:45.887425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.038 [2024-11-29 07:41:45.887905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.038 [2024-11-29 07:41:45.887964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.038 [2024-11-29 07:41:45.888088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.038 [2024-11-29 07:41:45.888164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.038 pt2 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.038 [2024-11-29 07:41:45.895214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.038 "name": "raid_bdev1", 00:09:56.038 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:56.038 "strip_size_kb": 64, 00:09:56.038 "state": "configuring", 00:09:56.038 "raid_level": "concat", 00:09:56.038 "superblock": true, 00:09:56.038 "num_base_bdevs": 3, 00:09:56.038 "num_base_bdevs_discovered": 1, 00:09:56.038 "num_base_bdevs_operational": 3, 00:09:56.038 "base_bdevs_list": [ 00:09:56.038 { 00:09:56.038 "name": "pt1", 00:09:56.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.038 "is_configured": true, 00:09:56.038 "data_offset": 2048, 00:09:56.038 "data_size": 63488 00:09:56.038 }, 00:09:56.038 { 00:09:56.038 "name": null, 00:09:56.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.038 "is_configured": false, 00:09:56.038 "data_offset": 0, 00:09:56.038 "data_size": 63488 00:09:56.038 }, 00:09:56.038 { 00:09:56.038 "name": null, 00:09:56.038 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.038 "is_configured": false, 00:09:56.038 "data_offset": 2048, 00:09:56.038 "data_size": 63488 00:09:56.038 } 00:09:56.038 ] 00:09:56.038 }' 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.038 07:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.605 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.606 [2024-11-29 07:41:46.338396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.606 [2024-11-29 07:41:46.338466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.606 [2024-11-29 07:41:46.338486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:56.606 [2024-11-29 07:41:46.338497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.606 [2024-11-29 07:41:46.338968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.606 [2024-11-29 07:41:46.338989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.606 [2024-11-29 07:41:46.339070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.606 [2024-11-29 07:41:46.339108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.606 pt2 00:09:56.606 07:41:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.606 [2024-11-29 07:41:46.350345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.606 [2024-11-29 07:41:46.350435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.606 [2024-11-29 07:41:46.350454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:56.606 [2024-11-29 07:41:46.350463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.606 [2024-11-29 07:41:46.350832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.606 [2024-11-29 07:41:46.350859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.606 [2024-11-29 07:41:46.350917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:56.606 [2024-11-29 07:41:46.350938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.606 [2024-11-29 07:41:46.351052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.606 [2024-11-29 07:41:46.351062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:56.606 [2024-11-29 07:41:46.351347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:56.606 [2024-11-29 07:41:46.351512] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.606 [2024-11-29 07:41:46.351521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.606 [2024-11-29 07:41:46.351682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.606 pt3 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.606 07:41:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.606 "name": "raid_bdev1", 00:09:56.606 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:56.606 "strip_size_kb": 64, 00:09:56.606 "state": "online", 00:09:56.606 "raid_level": "concat", 00:09:56.606 "superblock": true, 00:09:56.606 "num_base_bdevs": 3, 00:09:56.606 "num_base_bdevs_discovered": 3, 00:09:56.606 "num_base_bdevs_operational": 3, 00:09:56.606 "base_bdevs_list": [ 00:09:56.606 { 00:09:56.606 "name": "pt1", 00:09:56.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.606 "is_configured": true, 00:09:56.606 "data_offset": 2048, 00:09:56.606 "data_size": 63488 00:09:56.606 }, 00:09:56.606 { 00:09:56.606 "name": "pt2", 00:09:56.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.606 "is_configured": true, 00:09:56.606 "data_offset": 2048, 00:09:56.606 "data_size": 63488 00:09:56.606 }, 00:09:56.606 { 00:09:56.606 "name": "pt3", 00:09:56.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.606 "is_configured": true, 00:09:56.606 "data_offset": 2048, 00:09:56.606 "data_size": 63488 00:09:56.606 } 00:09:56.606 ] 00:09:56.606 }' 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.606 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.865 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.865 [2024-11-29 07:41:46.793908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.124 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.125 "name": "raid_bdev1", 00:09:57.125 "aliases": [ 00:09:57.125 "def5216f-4f21-4fcb-a34a-2d4f662d7e83" 00:09:57.125 ], 00:09:57.125 "product_name": "Raid Volume", 00:09:57.125 "block_size": 512, 00:09:57.125 "num_blocks": 190464, 00:09:57.125 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:57.125 "assigned_rate_limits": { 00:09:57.125 "rw_ios_per_sec": 0, 00:09:57.125 "rw_mbytes_per_sec": 0, 00:09:57.125 "r_mbytes_per_sec": 0, 00:09:57.125 "w_mbytes_per_sec": 0 00:09:57.125 }, 00:09:57.125 "claimed": false, 00:09:57.125 "zoned": false, 00:09:57.125 "supported_io_types": { 00:09:57.125 "read": true, 00:09:57.125 "write": true, 00:09:57.125 "unmap": true, 00:09:57.125 "flush": true, 00:09:57.125 "reset": true, 00:09:57.125 "nvme_admin": false, 00:09:57.125 "nvme_io": false, 00:09:57.125 
"nvme_io_md": false, 00:09:57.125 "write_zeroes": true, 00:09:57.125 "zcopy": false, 00:09:57.125 "get_zone_info": false, 00:09:57.125 "zone_management": false, 00:09:57.125 "zone_append": false, 00:09:57.125 "compare": false, 00:09:57.125 "compare_and_write": false, 00:09:57.125 "abort": false, 00:09:57.125 "seek_hole": false, 00:09:57.125 "seek_data": false, 00:09:57.125 "copy": false, 00:09:57.125 "nvme_iov_md": false 00:09:57.125 }, 00:09:57.125 "memory_domains": [ 00:09:57.125 { 00:09:57.125 "dma_device_id": "system", 00:09:57.125 "dma_device_type": 1 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.125 "dma_device_type": 2 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "dma_device_id": "system", 00:09:57.125 "dma_device_type": 1 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.125 "dma_device_type": 2 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "dma_device_id": "system", 00:09:57.125 "dma_device_type": 1 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.125 "dma_device_type": 2 00:09:57.125 } 00:09:57.125 ], 00:09:57.125 "driver_specific": { 00:09:57.125 "raid": { 00:09:57.125 "uuid": "def5216f-4f21-4fcb-a34a-2d4f662d7e83", 00:09:57.125 "strip_size_kb": 64, 00:09:57.125 "state": "online", 00:09:57.125 "raid_level": "concat", 00:09:57.125 "superblock": true, 00:09:57.125 "num_base_bdevs": 3, 00:09:57.125 "num_base_bdevs_discovered": 3, 00:09:57.125 "num_base_bdevs_operational": 3, 00:09:57.125 "base_bdevs_list": [ 00:09:57.125 { 00:09:57.125 "name": "pt1", 00:09:57.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.125 "is_configured": true, 00:09:57.125 "data_offset": 2048, 00:09:57.125 "data_size": 63488 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "name": "pt2", 00:09:57.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.125 "is_configured": true, 00:09:57.125 "data_offset": 2048, 00:09:57.125 "data_size": 
63488 00:09:57.125 }, 00:09:57.125 { 00:09:57.125 "name": "pt3", 00:09:57.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.125 "is_configured": true, 00:09:57.125 "data_offset": 2048, 00:09:57.125 "data_size": 63488 00:09:57.125 } 00:09:57.125 ] 00:09:57.125 } 00:09:57.125 } 00:09:57.125 }' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:57.125 pt2 00:09:57.125 pt3' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 07:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:57.125 [2024-11-29 07:41:47.061405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' def5216f-4f21-4fcb-a34a-2d4f662d7e83 '!=' def5216f-4f21-4fcb-a34a-2d4f662d7e83 ']' 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66652 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66652 ']' 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66652 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66652 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66652' 00:09:57.384 killing process with pid 66652 00:09:57.384 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66652 00:09:57.385 [2024-11-29 07:41:47.143268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.385 [2024-11-29 07:41:47.143422] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.385 [2024-11-29 07:41:47.143519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.385 07:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66652 00:09:57.385 [2024-11-29 07:41:47.143590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:57.644 [2024-11-29 07:41:47.432598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.583 07:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:58.583 00:09:58.583 real 0m5.142s 00:09:58.583 user 0m7.441s 00:09:58.583 sys 0m0.851s 00:09:58.583 07:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.583 07:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.583 ************************************ 00:09:58.583 END TEST raid_superblock_test 00:09:58.583 ************************************ 00:09:58.843 07:41:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:58.843 07:41:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.843 07:41:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.843 07:41:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 ************************************ 00:09:58.843 START TEST raid_read_error_test 00:09:58.843 ************************************ 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:58.843 07:41:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Mv0DGfl5N1 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66905 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66905 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66905 ']' 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.843 07:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 [2024-11-29 07:41:48.686006] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:58.843 [2024-11-29 07:41:48.686234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66905 ] 00:09:59.102 [2024-11-29 07:41:48.838631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.102 [2024-11-29 07:41:48.947689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.361 [2024-11-29 07:41:49.140292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.361 [2024-11-29 07:41:49.140351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.620 BaseBdev1_malloc 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.620 true 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.620 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 [2024-11-29 07:41:49.565304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:59.881 [2024-11-29 07:41:49.565399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.881 [2024-11-29 07:41:49.565422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:59.881 [2024-11-29 07:41:49.565432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.881 [2024-11-29 07:41:49.567568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.881 [2024-11-29 07:41:49.567608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:59.881 BaseBdev1 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 BaseBdev2_malloc 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 true 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 [2024-11-29 07:41:49.629571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:59.881 [2024-11-29 07:41:49.629623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.881 [2024-11-29 07:41:49.629638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:59.881 [2024-11-29 07:41:49.629648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.881 [2024-11-29 07:41:49.631705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.881 [2024-11-29 07:41:49.631796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:59.881 BaseBdev2 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 BaseBdev3_malloc 00:09:59.881 07:41:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 true 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 [2024-11-29 07:41:49.706308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.881 [2024-11-29 07:41:49.706357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.881 [2024-11-29 07:41:49.706389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:59.881 [2024-11-29 07:41:49.706399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.881 [2024-11-29 07:41:49.708409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.881 [2024-11-29 07:41:49.708511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:59.881 BaseBdev3 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 [2024-11-29 07:41:49.718365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.881 [2024-11-29 07:41:49.720072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.881 [2024-11-29 07:41:49.720203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.881 [2024-11-29 07:41:49.720403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.881 [2024-11-29 07:41:49.720416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.881 [2024-11-29 07:41:49.720645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:59.881 [2024-11-29 07:41:49.720794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.881 [2024-11-29 07:41:49.720807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:59.881 [2024-11-29 07:41:49.720932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.881 07:41:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.881 "name": "raid_bdev1", 00:09:59.881 "uuid": "a1ba3922-fe4a-4564-821f-b61fc400a613", 00:09:59.881 "strip_size_kb": 64, 00:09:59.881 "state": "online", 00:09:59.881 "raid_level": "concat", 00:09:59.881 "superblock": true, 00:09:59.881 "num_base_bdevs": 3, 00:09:59.881 "num_base_bdevs_discovered": 3, 00:09:59.881 "num_base_bdevs_operational": 3, 00:09:59.881 "base_bdevs_list": [ 00:09:59.881 { 00:09:59.881 "name": "BaseBdev1", 00:09:59.881 "uuid": "2565ed7d-4135-5e7d-b50f-65edfd94d389", 00:09:59.881 "is_configured": true, 00:09:59.881 "data_offset": 2048, 00:09:59.881 "data_size": 63488 00:09:59.881 }, 00:09:59.881 { 00:09:59.881 "name": "BaseBdev2", 00:09:59.881 "uuid": "6de7d38a-923d-5086-ad9e-9f04533bb115", 00:09:59.881 "is_configured": true, 00:09:59.881 "data_offset": 2048, 00:09:59.881 "data_size": 63488 
00:09:59.881 }, 00:09:59.881 { 00:09:59.881 "name": "BaseBdev3", 00:09:59.881 "uuid": "79cbdfac-7a91-5211-aee0-47e786fac820", 00:09:59.881 "is_configured": true, 00:09:59.881 "data_offset": 2048, 00:09:59.881 "data_size": 63488 00:09:59.881 } 00:09:59.881 ] 00:09:59.881 }' 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.881 07:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.450 07:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:00.450 07:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:00.450 [2024-11-29 07:41:50.294825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:01.387 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.388 "name": "raid_bdev1", 00:10:01.388 "uuid": "a1ba3922-fe4a-4564-821f-b61fc400a613", 00:10:01.388 "strip_size_kb": 64, 00:10:01.388 "state": "online", 00:10:01.388 "raid_level": "concat", 00:10:01.388 "superblock": true, 00:10:01.388 "num_base_bdevs": 3, 00:10:01.388 "num_base_bdevs_discovered": 3, 00:10:01.388 "num_base_bdevs_operational": 3, 00:10:01.388 "base_bdevs_list": [ 00:10:01.388 { 00:10:01.388 "name": "BaseBdev1", 00:10:01.388 "uuid": "2565ed7d-4135-5e7d-b50f-65edfd94d389", 00:10:01.388 "is_configured": true, 00:10:01.388 "data_offset": 2048, 00:10:01.388 "data_size": 63488 
00:10:01.388 }, 00:10:01.388 { 00:10:01.388 "name": "BaseBdev2", 00:10:01.388 "uuid": "6de7d38a-923d-5086-ad9e-9f04533bb115", 00:10:01.388 "is_configured": true, 00:10:01.388 "data_offset": 2048, 00:10:01.388 "data_size": 63488 00:10:01.388 }, 00:10:01.388 { 00:10:01.388 "name": "BaseBdev3", 00:10:01.388 "uuid": "79cbdfac-7a91-5211-aee0-47e786fac820", 00:10:01.388 "is_configured": true, 00:10:01.388 "data_offset": 2048, 00:10:01.388 "data_size": 63488 00:10:01.388 } 00:10:01.388 ] 00:10:01.388 }' 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.388 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.956 [2024-11-29 07:41:51.666953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.956 [2024-11-29 07:41:51.666984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.956 [2024-11-29 07:41:51.669667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.956 [2024-11-29 07:41:51.669712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.956 [2024-11-29 07:41:51.669747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.956 [2024-11-29 07:41:51.669758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:01.956 { 00:10:01.956 "results": [ 00:10:01.956 { 00:10:01.956 "job": "raid_bdev1", 00:10:01.956 "core_mask": "0x1", 00:10:01.956 "workload": "randrw", 00:10:01.956 "percentage": 50, 
00:10:01.956 "status": "finished", 00:10:01.956 "queue_depth": 1, 00:10:01.956 "io_size": 131072, 00:10:01.956 "runtime": 1.372999, 00:10:01.956 "iops": 15978.161673824963, 00:10:01.956 "mibps": 1997.2702092281204, 00:10:01.956 "io_failed": 1, 00:10:01.956 "io_timeout": 0, 00:10:01.956 "avg_latency_us": 86.6825224605501, 00:10:01.956 "min_latency_us": 25.152838427947597, 00:10:01.956 "max_latency_us": 1409.4532751091704 00:10:01.956 } 00:10:01.956 ], 00:10:01.956 "core_count": 1 00:10:01.956 } 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66905 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66905 ']' 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66905 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66905 00:10:01.956 killing process with pid 66905 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.956 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.957 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66905' 00:10:01.957 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66905 00:10:01.957 [2024-11-29 07:41:51.714124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.957 07:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66905 00:10:02.216 [2024-11-29 
07:41:51.937669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Mv0DGfl5N1 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.151 ************************************ 00:10:03.151 END TEST raid_read_error_test 00:10:03.151 ************************************ 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:03.151 00:10:03.151 real 0m4.508s 00:10:03.151 user 0m5.396s 00:10:03.151 sys 0m0.547s 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.151 07:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 07:41:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:03.410 07:41:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.410 07:41:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.410 07:41:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 ************************************ 00:10:03.410 START TEST raid_write_error_test 00:10:03.410 ************************************ 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:03.410 07:41:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.410 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.411 07:41:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NWhzSG1tN3 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67053 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67053 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67053 ']' 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.411 07:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.411 [2024-11-29 07:41:53.263495] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:03.411 [2024-11-29 07:41:53.263710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67053 ] 00:10:03.669 [2024-11-29 07:41:53.434818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.669 [2024-11-29 07:41:53.545122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.928 [2024-11-29 07:41:53.733850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.928 [2024-11-29 07:41:53.733986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 BaseBdev1_malloc 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.187 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 true 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 [2024-11-29 07:41:54.147981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.447 [2024-11-29 07:41:54.148038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.447 [2024-11-29 07:41:54.148057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.447 [2024-11-29 07:41:54.148068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.447 [2024-11-29 07:41:54.150082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.447 [2024-11-29 07:41:54.150132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.447 BaseBdev1 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.447 BaseBdev2_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 true 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 [2024-11-29 07:41:54.213061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.447 [2024-11-29 07:41:54.213139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.447 [2024-11-29 07:41:54.213156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.447 [2024-11-29 07:41:54.213166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.447 [2024-11-29 07:41:54.215199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.447 [2024-11-29 07:41:54.215234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.447 BaseBdev2 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.447 07:41:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 BaseBdev3_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 true 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 [2024-11-29 07:41:54.299220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.447 [2024-11-29 07:41:54.299277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.447 [2024-11-29 07:41:54.299293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:04.447 [2024-11-29 07:41:54.299303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.447 [2024-11-29 07:41:54.301399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.447 [2024-11-29 07:41:54.301439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:04.447 BaseBdev3 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 [2024-11-29 07:41:54.311288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.447 [2024-11-29 07:41:54.313036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.447 [2024-11-29 07:41:54.313110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.447 [2024-11-29 07:41:54.313323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.447 [2024-11-29 07:41:54.313336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.447 [2024-11-29 07:41:54.313589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:04.447 [2024-11-29 07:41:54.313751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.447 [2024-11-29 07:41:54.313764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.447 [2024-11-29 07:41:54.313916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.447 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.448 "name": "raid_bdev1", 00:10:04.448 "uuid": "c92eaf01-b289-4b22-a16d-eb123388fe7e", 00:10:04.448 "strip_size_kb": 64, 00:10:04.448 "state": "online", 00:10:04.448 "raid_level": "concat", 00:10:04.448 "superblock": true, 00:10:04.448 "num_base_bdevs": 3, 00:10:04.448 "num_base_bdevs_discovered": 3, 00:10:04.448 "num_base_bdevs_operational": 3, 00:10:04.448 "base_bdevs_list": [ 00:10:04.448 { 00:10:04.448 
"name": "BaseBdev1", 00:10:04.448 "uuid": "65273f32-596e-5ccd-b07d-0da31604465c", 00:10:04.448 "is_configured": true, 00:10:04.448 "data_offset": 2048, 00:10:04.448 "data_size": 63488 00:10:04.448 }, 00:10:04.448 { 00:10:04.448 "name": "BaseBdev2", 00:10:04.448 "uuid": "4103766e-386f-5ef9-9a99-b69060ccb6bd", 00:10:04.448 "is_configured": true, 00:10:04.448 "data_offset": 2048, 00:10:04.448 "data_size": 63488 00:10:04.448 }, 00:10:04.448 { 00:10:04.448 "name": "BaseBdev3", 00:10:04.448 "uuid": "d4e95a6f-6d9b-5d28-ae38-e727c6803788", 00:10:04.448 "is_configured": true, 00:10:04.448 "data_offset": 2048, 00:10:04.448 "data_size": 63488 00:10:04.448 } 00:10:04.448 ] 00:10:04.448 }' 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.448 07:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.028 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.028 07:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.028 [2024-11-29 07:41:54.847658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.961 "name": "raid_bdev1", 00:10:05.961 "uuid": "c92eaf01-b289-4b22-a16d-eb123388fe7e", 00:10:05.961 "strip_size_kb": 64, 00:10:05.961 "state": "online", 
00:10:05.961 "raid_level": "concat", 00:10:05.961 "superblock": true, 00:10:05.961 "num_base_bdevs": 3, 00:10:05.961 "num_base_bdevs_discovered": 3, 00:10:05.961 "num_base_bdevs_operational": 3, 00:10:05.961 "base_bdevs_list": [ 00:10:05.961 { 00:10:05.961 "name": "BaseBdev1", 00:10:05.961 "uuid": "65273f32-596e-5ccd-b07d-0da31604465c", 00:10:05.961 "is_configured": true, 00:10:05.961 "data_offset": 2048, 00:10:05.961 "data_size": 63488 00:10:05.961 }, 00:10:05.961 { 00:10:05.961 "name": "BaseBdev2", 00:10:05.961 "uuid": "4103766e-386f-5ef9-9a99-b69060ccb6bd", 00:10:05.961 "is_configured": true, 00:10:05.961 "data_offset": 2048, 00:10:05.961 "data_size": 63488 00:10:05.961 }, 00:10:05.961 { 00:10:05.961 "name": "BaseBdev3", 00:10:05.961 "uuid": "d4e95a6f-6d9b-5d28-ae38-e727c6803788", 00:10:05.961 "is_configured": true, 00:10:05.961 "data_offset": 2048, 00:10:05.961 "data_size": 63488 00:10:05.961 } 00:10:05.961 ] 00:10:05.961 }' 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.961 07:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.528 [2024-11-29 07:41:56.225956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.528 [2024-11-29 07:41:56.226052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.528 [2024-11-29 07:41:56.228899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.528 [2024-11-29 07:41:56.228993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.528 [2024-11-29 07:41:56.229067] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.528 [2024-11-29 07:41:56.229139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67053 00:10:06.528 { 00:10:06.528 "results": [ 00:10:06.528 { 00:10:06.528 "job": "raid_bdev1", 00:10:06.528 "core_mask": "0x1", 00:10:06.528 "workload": "randrw", 00:10:06.528 "percentage": 50, 00:10:06.528 "status": "finished", 00:10:06.528 "queue_depth": 1, 00:10:06.528 "io_size": 131072, 00:10:06.528 "runtime": 1.379331, 00:10:06.528 "iops": 15767.788877361561, 00:10:06.528 "mibps": 1970.9736096701952, 00:10:06.528 "io_failed": 1, 00:10:06.528 "io_timeout": 0, 00:10:06.528 "avg_latency_us": 87.87608420418611, 00:10:06.528 "min_latency_us": 24.817467248908297, 00:10:06.528 "max_latency_us": 1359.3711790393013 00:10:06.528 } 00:10:06.528 ], 00:10:06.528 "core_count": 1 00:10:06.528 } 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67053 ']' 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67053 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67053 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67053' 00:10:06.528 killing process with pid 67053 00:10:06.528 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67053 00:10:06.528 [2024-11-29 07:41:56.262179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.529 07:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67053 00:10:06.787 [2024-11-29 07:41:56.484909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.719 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NWhzSG1tN3 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:07.720 ************************************ 00:10:07.720 END TEST raid_write_error_test 00:10:07.720 ************************************ 00:10:07.720 00:10:07.720 real 0m4.490s 00:10:07.720 user 0m5.329s 00:10:07.720 sys 0m0.535s 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.720 07:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 07:41:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:07.976 07:41:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:07.976 07:41:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.976 07:41:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.976 07:41:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 ************************************ 00:10:07.976 START TEST raid_state_function_test 00:10:07.976 ************************************ 00:10:07.976 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67191 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67191' 00:10:07.977 Process raid pid: 67191 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67191 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67191 ']' 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.977 07:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.977 [2024-11-29 07:41:57.815904] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:07.977 [2024-11-29 07:41:57.816109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.235 [2024-11-29 07:41:57.986726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.235 [2024-11-29 07:41:58.100992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.494 [2024-11-29 07:41:58.296056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.494 [2024-11-29 07:41:58.296189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.752 [2024-11-29 07:41:58.657460] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.752 [2024-11-29 07:41:58.657521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.752 [2024-11-29 07:41:58.657532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.752 [2024-11-29 07:41:58.657542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.752 [2024-11-29 07:41:58.657548] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.752 [2024-11-29 07:41:58.657557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.752 
07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.752 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.011 07:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.011 "name": "Existed_Raid", 00:10:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.011 "strip_size_kb": 0, 00:10:09.011 "state": "configuring", 00:10:09.011 "raid_level": "raid1", 00:10:09.011 "superblock": false, 00:10:09.011 "num_base_bdevs": 3, 00:10:09.011 "num_base_bdevs_discovered": 0, 00:10:09.011 "num_base_bdevs_operational": 3, 00:10:09.011 "base_bdevs_list": [ 00:10:09.011 { 00:10:09.011 "name": "BaseBdev1", 00:10:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.011 "is_configured": false, 00:10:09.011 "data_offset": 0, 00:10:09.011 "data_size": 0 00:10:09.011 }, 00:10:09.011 { 00:10:09.011 "name": "BaseBdev2", 00:10:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.011 "is_configured": false, 00:10:09.011 "data_offset": 0, 00:10:09.011 "data_size": 0 00:10:09.011 }, 00:10:09.011 { 00:10:09.011 "name": "BaseBdev3", 00:10:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.011 "is_configured": false, 00:10:09.011 "data_offset": 0, 00:10:09.011 "data_size": 0 00:10:09.011 } 00:10:09.011 ] 00:10:09.011 }' 00:10:09.011 07:41:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.011 07:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.269 [2024-11-29 07:41:59.100658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.269 [2024-11-29 07:41:59.100739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.269 [2024-11-29 07:41:59.108645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.269 [2024-11-29 07:41:59.108730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.269 [2024-11-29 07:41:59.108759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.269 [2024-11-29 07:41:59.108781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.269 [2024-11-29 07:41:59.108798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.269 [2024-11-29 07:41:59.108819] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.269 [2024-11-29 07:41:59.150906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.269 BaseBdev1 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.269 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.270 [ 00:10:09.270 { 00:10:09.270 "name": "BaseBdev1", 00:10:09.270 "aliases": [ 00:10:09.270 "592c97fd-676a-4bd0-b9de-413a70df839e" 00:10:09.270 ], 00:10:09.270 "product_name": "Malloc disk", 00:10:09.270 "block_size": 512, 00:10:09.270 "num_blocks": 65536, 00:10:09.270 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:09.270 "assigned_rate_limits": { 00:10:09.270 "rw_ios_per_sec": 0, 00:10:09.270 "rw_mbytes_per_sec": 0, 00:10:09.270 "r_mbytes_per_sec": 0, 00:10:09.270 "w_mbytes_per_sec": 0 00:10:09.270 }, 00:10:09.270 "claimed": true, 00:10:09.270 "claim_type": "exclusive_write", 00:10:09.270 "zoned": false, 00:10:09.270 "supported_io_types": { 00:10:09.270 "read": true, 00:10:09.270 "write": true, 00:10:09.270 "unmap": true, 00:10:09.270 "flush": true, 00:10:09.270 "reset": true, 00:10:09.270 "nvme_admin": false, 00:10:09.270 "nvme_io": false, 00:10:09.270 "nvme_io_md": false, 00:10:09.270 "write_zeroes": true, 00:10:09.270 "zcopy": true, 00:10:09.270 "get_zone_info": false, 00:10:09.270 "zone_management": false, 00:10:09.270 "zone_append": false, 00:10:09.270 "compare": false, 00:10:09.270 "compare_and_write": false, 00:10:09.270 "abort": true, 00:10:09.270 "seek_hole": false, 00:10:09.270 "seek_data": false, 00:10:09.270 "copy": true, 00:10:09.270 "nvme_iov_md": false 00:10:09.270 }, 00:10:09.270 "memory_domains": [ 00:10:09.270 { 00:10:09.270 "dma_device_id": "system", 00:10:09.270 "dma_device_type": 1 00:10:09.270 }, 00:10:09.270 { 00:10:09.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.270 "dma_device_type": 2 00:10:09.270 } 00:10:09.270 ], 00:10:09.270 "driver_specific": {} 00:10:09.270 } 00:10:09.270 ] 00:10:09.270 07:41:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.270 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.529 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.529 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:09.529 "name": "Existed_Raid", 00:10:09.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.529 "strip_size_kb": 0, 00:10:09.529 "state": "configuring", 00:10:09.529 "raid_level": "raid1", 00:10:09.529 "superblock": false, 00:10:09.529 "num_base_bdevs": 3, 00:10:09.529 "num_base_bdevs_discovered": 1, 00:10:09.529 "num_base_bdevs_operational": 3, 00:10:09.529 "base_bdevs_list": [ 00:10:09.529 { 00:10:09.529 "name": "BaseBdev1", 00:10:09.529 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:09.529 "is_configured": true, 00:10:09.529 "data_offset": 0, 00:10:09.529 "data_size": 65536 00:10:09.529 }, 00:10:09.529 { 00:10:09.529 "name": "BaseBdev2", 00:10:09.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.529 "is_configured": false, 00:10:09.529 "data_offset": 0, 00:10:09.529 "data_size": 0 00:10:09.529 }, 00:10:09.529 { 00:10:09.529 "name": "BaseBdev3", 00:10:09.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.529 "is_configured": false, 00:10:09.529 "data_offset": 0, 00:10:09.529 "data_size": 0 00:10:09.529 } 00:10:09.529 ] 00:10:09.529 }' 00:10:09.529 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.529 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.788 [2024-11-29 07:41:59.646121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.788 [2024-11-29 07:41:59.646174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.788 [2024-11-29 07:41:59.658143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.788 [2024-11-29 07:41:59.659866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.788 [2024-11-29 07:41:59.659912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.788 [2024-11-29 07:41:59.659922] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.788 [2024-11-29 07:41:59.659947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.788 "name": "Existed_Raid", 00:10:09.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.788 "strip_size_kb": 0, 00:10:09.788 "state": "configuring", 00:10:09.788 "raid_level": "raid1", 00:10:09.788 "superblock": false, 00:10:09.788 "num_base_bdevs": 3, 00:10:09.788 "num_base_bdevs_discovered": 1, 00:10:09.788 "num_base_bdevs_operational": 3, 00:10:09.788 "base_bdevs_list": [ 00:10:09.788 { 00:10:09.788 "name": "BaseBdev1", 00:10:09.788 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:09.788 "is_configured": true, 00:10:09.788 "data_offset": 0, 00:10:09.788 "data_size": 65536 00:10:09.788 }, 00:10:09.788 { 00:10:09.788 "name": "BaseBdev2", 00:10:09.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.788 
"is_configured": false, 00:10:09.788 "data_offset": 0, 00:10:09.788 "data_size": 0 00:10:09.788 }, 00:10:09.788 { 00:10:09.788 "name": "BaseBdev3", 00:10:09.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.788 "is_configured": false, 00:10:09.788 "data_offset": 0, 00:10:09.788 "data_size": 0 00:10:09.788 } 00:10:09.788 ] 00:10:09.788 }' 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.788 07:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [2024-11-29 07:42:00.154427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.355 BaseBdev2 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.355 07:42:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [ 00:10:10.355 { 00:10:10.355 "name": "BaseBdev2", 00:10:10.355 "aliases": [ 00:10:10.355 "cb46cc66-c99e-4564-b4c8-debae818c209" 00:10:10.355 ], 00:10:10.355 "product_name": "Malloc disk", 00:10:10.355 "block_size": 512, 00:10:10.355 "num_blocks": 65536, 00:10:10.355 "uuid": "cb46cc66-c99e-4564-b4c8-debae818c209", 00:10:10.355 "assigned_rate_limits": { 00:10:10.355 "rw_ios_per_sec": 0, 00:10:10.355 "rw_mbytes_per_sec": 0, 00:10:10.355 "r_mbytes_per_sec": 0, 00:10:10.355 "w_mbytes_per_sec": 0 00:10:10.355 }, 00:10:10.355 "claimed": true, 00:10:10.355 "claim_type": "exclusive_write", 00:10:10.355 "zoned": false, 00:10:10.355 "supported_io_types": { 00:10:10.355 "read": true, 00:10:10.355 "write": true, 00:10:10.355 "unmap": true, 00:10:10.355 "flush": true, 00:10:10.355 "reset": true, 00:10:10.355 "nvme_admin": false, 00:10:10.355 "nvme_io": false, 00:10:10.355 "nvme_io_md": false, 00:10:10.355 "write_zeroes": true, 00:10:10.355 "zcopy": true, 00:10:10.355 "get_zone_info": false, 00:10:10.355 "zone_management": false, 00:10:10.355 "zone_append": false, 00:10:10.355 "compare": false, 00:10:10.355 "compare_and_write": false, 00:10:10.355 "abort": true, 00:10:10.355 "seek_hole": false, 00:10:10.355 "seek_data": false, 00:10:10.355 "copy": true, 00:10:10.355 "nvme_iov_md": false 00:10:10.355 }, 00:10:10.355 
"memory_domains": [ 00:10:10.355 { 00:10:10.355 "dma_device_id": "system", 00:10:10.355 "dma_device_type": 1 00:10:10.355 }, 00:10:10.355 { 00:10:10.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.355 "dma_device_type": 2 00:10:10.355 } 00:10:10.355 ], 00:10:10.355 "driver_specific": {} 00:10:10.355 } 00:10:10.355 ] 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.355 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.356 "name": "Existed_Raid", 00:10:10.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.356 "strip_size_kb": 0, 00:10:10.356 "state": "configuring", 00:10:10.356 "raid_level": "raid1", 00:10:10.356 "superblock": false, 00:10:10.356 "num_base_bdevs": 3, 00:10:10.356 "num_base_bdevs_discovered": 2, 00:10:10.356 "num_base_bdevs_operational": 3, 00:10:10.356 "base_bdevs_list": [ 00:10:10.356 { 00:10:10.356 "name": "BaseBdev1", 00:10:10.356 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:10.356 "is_configured": true, 00:10:10.356 "data_offset": 0, 00:10:10.356 "data_size": 65536 00:10:10.356 }, 00:10:10.356 { 00:10:10.356 "name": "BaseBdev2", 00:10:10.356 "uuid": "cb46cc66-c99e-4564-b4c8-debae818c209", 00:10:10.356 "is_configured": true, 00:10:10.356 "data_offset": 0, 00:10:10.356 "data_size": 65536 00:10:10.356 }, 00:10:10.356 { 00:10:10.356 "name": "BaseBdev3", 00:10:10.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.356 "is_configured": false, 00:10:10.356 "data_offset": 0, 00:10:10.356 "data_size": 0 00:10:10.356 } 00:10:10.356 ] 00:10:10.356 }' 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.356 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 [2024-11-29 07:42:00.662480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.925 [2024-11-29 07:42:00.662534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.925 [2024-11-29 07:42:00.662546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:10.925 [2024-11-29 07:42:00.662799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:10.925 [2024-11-29 07:42:00.662963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.925 [2024-11-29 07:42:00.662971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:10.925 [2024-11-29 07:42:00.663319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.925 BaseBdev3 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.925 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 [ 00:10:10.925 { 00:10:10.925 "name": "BaseBdev3", 00:10:10.925 "aliases": [ 00:10:10.925 "026eed0f-cd3d-4d63-9a3a-a9088285248d" 00:10:10.925 ], 00:10:10.925 "product_name": "Malloc disk", 00:10:10.925 "block_size": 512, 00:10:10.925 "num_blocks": 65536, 00:10:10.925 "uuid": "026eed0f-cd3d-4d63-9a3a-a9088285248d", 00:10:10.925 "assigned_rate_limits": { 00:10:10.925 "rw_ios_per_sec": 0, 00:10:10.925 "rw_mbytes_per_sec": 0, 00:10:10.925 "r_mbytes_per_sec": 0, 00:10:10.925 "w_mbytes_per_sec": 0 00:10:10.925 }, 00:10:10.925 "claimed": true, 00:10:10.925 "claim_type": "exclusive_write", 00:10:10.925 "zoned": false, 00:10:10.925 "supported_io_types": { 00:10:10.926 "read": true, 00:10:10.926 "write": true, 00:10:10.926 "unmap": true, 00:10:10.926 "flush": true, 00:10:10.926 "reset": true, 00:10:10.926 "nvme_admin": false, 00:10:10.926 "nvme_io": false, 00:10:10.926 "nvme_io_md": false, 00:10:10.926 "write_zeroes": true, 00:10:10.926 "zcopy": true, 00:10:10.926 "get_zone_info": false, 00:10:10.926 "zone_management": false, 00:10:10.926 "zone_append": false, 00:10:10.926 "compare": false, 00:10:10.926 "compare_and_write": false, 00:10:10.926 "abort": true, 00:10:10.926 "seek_hole": false, 00:10:10.926 "seek_data": false, 00:10:10.926 
"copy": true, 00:10:10.926 "nvme_iov_md": false 00:10:10.926 }, 00:10:10.926 "memory_domains": [ 00:10:10.926 { 00:10:10.926 "dma_device_id": "system", 00:10:10.926 "dma_device_type": 1 00:10:10.926 }, 00:10:10.926 { 00:10:10.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.926 "dma_device_type": 2 00:10:10.926 } 00:10:10.926 ], 00:10:10.926 "driver_specific": {} 00:10:10.926 } 00:10:10.926 ] 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.926 07:42:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.926 "name": "Existed_Raid", 00:10:10.926 "uuid": "7d571e17-d8ce-4767-bca3-436c91789639", 00:10:10.926 "strip_size_kb": 0, 00:10:10.926 "state": "online", 00:10:10.926 "raid_level": "raid1", 00:10:10.926 "superblock": false, 00:10:10.926 "num_base_bdevs": 3, 00:10:10.926 "num_base_bdevs_discovered": 3, 00:10:10.926 "num_base_bdevs_operational": 3, 00:10:10.926 "base_bdevs_list": [ 00:10:10.926 { 00:10:10.926 "name": "BaseBdev1", 00:10:10.926 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:10.926 "is_configured": true, 00:10:10.926 "data_offset": 0, 00:10:10.926 "data_size": 65536 00:10:10.926 }, 00:10:10.926 { 00:10:10.926 "name": "BaseBdev2", 00:10:10.926 "uuid": "cb46cc66-c99e-4564-b4c8-debae818c209", 00:10:10.926 "is_configured": true, 00:10:10.926 "data_offset": 0, 00:10:10.926 "data_size": 65536 00:10:10.926 }, 00:10:10.926 { 00:10:10.926 "name": "BaseBdev3", 00:10:10.926 "uuid": "026eed0f-cd3d-4d63-9a3a-a9088285248d", 00:10:10.926 "is_configured": true, 00:10:10.926 "data_offset": 0, 00:10:10.926 "data_size": 65536 00:10:10.926 } 00:10:10.926 ] 00:10:10.926 }' 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.926 07:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.185 07:42:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.185 [2024-11-29 07:42:01.114054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.185 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.445 "name": "Existed_Raid", 00:10:11.445 "aliases": [ 00:10:11.445 "7d571e17-d8ce-4767-bca3-436c91789639" 00:10:11.445 ], 00:10:11.445 "product_name": "Raid Volume", 00:10:11.445 "block_size": 512, 00:10:11.445 "num_blocks": 65536, 00:10:11.445 "uuid": "7d571e17-d8ce-4767-bca3-436c91789639", 00:10:11.445 "assigned_rate_limits": { 00:10:11.445 "rw_ios_per_sec": 0, 00:10:11.445 "rw_mbytes_per_sec": 0, 00:10:11.445 "r_mbytes_per_sec": 0, 00:10:11.445 "w_mbytes_per_sec": 0 00:10:11.445 }, 00:10:11.445 "claimed": false, 00:10:11.445 "zoned": false, 
00:10:11.445 "supported_io_types": { 00:10:11.445 "read": true, 00:10:11.445 "write": true, 00:10:11.445 "unmap": false, 00:10:11.445 "flush": false, 00:10:11.445 "reset": true, 00:10:11.445 "nvme_admin": false, 00:10:11.445 "nvme_io": false, 00:10:11.445 "nvme_io_md": false, 00:10:11.445 "write_zeroes": true, 00:10:11.445 "zcopy": false, 00:10:11.445 "get_zone_info": false, 00:10:11.445 "zone_management": false, 00:10:11.445 "zone_append": false, 00:10:11.445 "compare": false, 00:10:11.445 "compare_and_write": false, 00:10:11.445 "abort": false, 00:10:11.445 "seek_hole": false, 00:10:11.445 "seek_data": false, 00:10:11.445 "copy": false, 00:10:11.445 "nvme_iov_md": false 00:10:11.445 }, 00:10:11.445 "memory_domains": [ 00:10:11.445 { 00:10:11.445 "dma_device_id": "system", 00:10:11.445 "dma_device_type": 1 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.445 "dma_device_type": 2 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "dma_device_id": "system", 00:10:11.445 "dma_device_type": 1 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.445 "dma_device_type": 2 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "dma_device_id": "system", 00:10:11.445 "dma_device_type": 1 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.445 "dma_device_type": 2 00:10:11.445 } 00:10:11.445 ], 00:10:11.445 "driver_specific": { 00:10:11.445 "raid": { 00:10:11.445 "uuid": "7d571e17-d8ce-4767-bca3-436c91789639", 00:10:11.445 "strip_size_kb": 0, 00:10:11.445 "state": "online", 00:10:11.445 "raid_level": "raid1", 00:10:11.445 "superblock": false, 00:10:11.445 "num_base_bdevs": 3, 00:10:11.445 "num_base_bdevs_discovered": 3, 00:10:11.445 "num_base_bdevs_operational": 3, 00:10:11.445 "base_bdevs_list": [ 00:10:11.445 { 00:10:11.445 "name": "BaseBdev1", 00:10:11.445 "uuid": "592c97fd-676a-4bd0-b9de-413a70df839e", 00:10:11.445 "is_configured": true, 00:10:11.445 
"data_offset": 0, 00:10:11.445 "data_size": 65536 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "name": "BaseBdev2", 00:10:11.445 "uuid": "cb46cc66-c99e-4564-b4c8-debae818c209", 00:10:11.445 "is_configured": true, 00:10:11.445 "data_offset": 0, 00:10:11.445 "data_size": 65536 00:10:11.445 }, 00:10:11.445 { 00:10:11.445 "name": "BaseBdev3", 00:10:11.445 "uuid": "026eed0f-cd3d-4d63-9a3a-a9088285248d", 00:10:11.445 "is_configured": true, 00:10:11.445 "data_offset": 0, 00:10:11.445 "data_size": 65536 00:10:11.445 } 00:10:11.445 ] 00:10:11.445 } 00:10:11.445 } 00:10:11.445 }' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.445 BaseBdev2 00:10:11.445 BaseBdev3' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.445 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.446 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.446 [2024-11-29 07:42:01.373319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.705 "name": "Existed_Raid", 00:10:11.705 "uuid": "7d571e17-d8ce-4767-bca3-436c91789639", 00:10:11.705 "strip_size_kb": 0, 00:10:11.705 "state": "online", 00:10:11.705 "raid_level": "raid1", 00:10:11.705 "superblock": false, 00:10:11.705 "num_base_bdevs": 3, 00:10:11.705 "num_base_bdevs_discovered": 2, 00:10:11.705 "num_base_bdevs_operational": 2, 00:10:11.705 "base_bdevs_list": [ 00:10:11.705 { 00:10:11.705 "name": null, 00:10:11.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.705 "is_configured": false, 00:10:11.705 "data_offset": 0, 00:10:11.705 "data_size": 65536 00:10:11.705 }, 00:10:11.705 { 00:10:11.705 "name": "BaseBdev2", 00:10:11.705 "uuid": "cb46cc66-c99e-4564-b4c8-debae818c209", 00:10:11.705 "is_configured": true, 00:10:11.705 "data_offset": 0, 00:10:11.705 "data_size": 65536 00:10:11.705 }, 00:10:11.705 { 00:10:11.705 "name": "BaseBdev3", 00:10:11.705 "uuid": "026eed0f-cd3d-4d63-9a3a-a9088285248d", 00:10:11.705 "is_configured": true, 00:10:11.705 "data_offset": 0, 00:10:11.705 "data_size": 65536 00:10:11.705 } 00:10:11.705 ] 
00:10:11.705 }' 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.705 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.964 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.964 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.964 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.964 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.965 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.965 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.965 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.223 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.223 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.223 07:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.223 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.223 07:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.223 [2024-11-29 07:42:01.932769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.223 07:42:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.223 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.223 [2024-11-29 07:42:02.081406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.223 [2024-11-29 07:42:02.081508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.480 [2024-11-29 07:42:02.177437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.480 [2024-11-29 07:42:02.177536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.480 [2024-11-29 07:42:02.177578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.480 07:42:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.480 BaseBdev2 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.480 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.481 
07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 [ 00:10:12.481 { 00:10:12.481 "name": "BaseBdev2", 00:10:12.481 "aliases": [ 00:10:12.481 "32d9295a-c006-4d95-b2b5-1629aab92b54" 00:10:12.481 ], 00:10:12.481 "product_name": "Malloc disk", 00:10:12.481 "block_size": 512, 00:10:12.481 "num_blocks": 65536, 00:10:12.481 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:12.481 "assigned_rate_limits": { 00:10:12.481 "rw_ios_per_sec": 0, 00:10:12.481 "rw_mbytes_per_sec": 0, 00:10:12.481 "r_mbytes_per_sec": 0, 00:10:12.481 "w_mbytes_per_sec": 0 00:10:12.481 }, 00:10:12.481 "claimed": false, 00:10:12.481 "zoned": false, 00:10:12.481 "supported_io_types": { 00:10:12.481 "read": true, 00:10:12.481 "write": true, 00:10:12.481 "unmap": true, 00:10:12.481 "flush": true, 00:10:12.481 "reset": true, 00:10:12.481 "nvme_admin": false, 00:10:12.481 "nvme_io": false, 00:10:12.481 "nvme_io_md": false, 00:10:12.481 "write_zeroes": true, 
00:10:12.481 "zcopy": true, 00:10:12.481 "get_zone_info": false, 00:10:12.481 "zone_management": false, 00:10:12.481 "zone_append": false, 00:10:12.481 "compare": false, 00:10:12.481 "compare_and_write": false, 00:10:12.481 "abort": true, 00:10:12.481 "seek_hole": false, 00:10:12.481 "seek_data": false, 00:10:12.481 "copy": true, 00:10:12.481 "nvme_iov_md": false 00:10:12.481 }, 00:10:12.481 "memory_domains": [ 00:10:12.481 { 00:10:12.481 "dma_device_id": "system", 00:10:12.481 "dma_device_type": 1 00:10:12.481 }, 00:10:12.481 { 00:10:12.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.481 "dma_device_type": 2 00:10:12.481 } 00:10:12.481 ], 00:10:12.481 "driver_specific": {} 00:10:12.481 } 00:10:12.481 ] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 BaseBdev3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.481 07:42:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 [ 00:10:12.481 { 00:10:12.481 "name": "BaseBdev3", 00:10:12.481 "aliases": [ 00:10:12.481 "da5717a2-83f0-4562-921e-9bfff466e42d" 00:10:12.481 ], 00:10:12.481 "product_name": "Malloc disk", 00:10:12.481 "block_size": 512, 00:10:12.481 "num_blocks": 65536, 00:10:12.481 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:12.481 "assigned_rate_limits": { 00:10:12.481 "rw_ios_per_sec": 0, 00:10:12.481 "rw_mbytes_per_sec": 0, 00:10:12.481 "r_mbytes_per_sec": 0, 00:10:12.481 "w_mbytes_per_sec": 0 00:10:12.481 }, 00:10:12.481 "claimed": false, 00:10:12.481 "zoned": false, 00:10:12.481 "supported_io_types": { 00:10:12.481 "read": true, 00:10:12.481 "write": true, 00:10:12.481 "unmap": true, 00:10:12.481 "flush": true, 00:10:12.481 "reset": true, 00:10:12.481 "nvme_admin": false, 00:10:12.481 "nvme_io": false, 00:10:12.481 "nvme_io_md": false, 00:10:12.481 "write_zeroes": true, 
00:10:12.481 "zcopy": true, 00:10:12.481 "get_zone_info": false, 00:10:12.481 "zone_management": false, 00:10:12.481 "zone_append": false, 00:10:12.481 "compare": false, 00:10:12.481 "compare_and_write": false, 00:10:12.481 "abort": true, 00:10:12.481 "seek_hole": false, 00:10:12.481 "seek_data": false, 00:10:12.481 "copy": true, 00:10:12.481 "nvme_iov_md": false 00:10:12.481 }, 00:10:12.481 "memory_domains": [ 00:10:12.481 { 00:10:12.481 "dma_device_id": "system", 00:10:12.481 "dma_device_type": 1 00:10:12.481 }, 00:10:12.481 { 00:10:12.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.481 "dma_device_type": 2 00:10:12.481 } 00:10:12.481 ], 00:10:12.481 "driver_specific": {} 00:10:12.481 } 00:10:12.481 ] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 [2024-11-29 07:42:02.402770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.481 [2024-11-29 07:42:02.402832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.481 [2024-11-29 07:42:02.402850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.481 [2024-11-29 07:42:02.404629] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.481 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.748 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.748 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:12.748 "name": "Existed_Raid", 00:10:12.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.748 "strip_size_kb": 0, 00:10:12.748 "state": "configuring", 00:10:12.748 "raid_level": "raid1", 00:10:12.748 "superblock": false, 00:10:12.748 "num_base_bdevs": 3, 00:10:12.748 "num_base_bdevs_discovered": 2, 00:10:12.748 "num_base_bdevs_operational": 3, 00:10:12.748 "base_bdevs_list": [ 00:10:12.748 { 00:10:12.748 "name": "BaseBdev1", 00:10:12.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.748 "is_configured": false, 00:10:12.748 "data_offset": 0, 00:10:12.748 "data_size": 0 00:10:12.748 }, 00:10:12.748 { 00:10:12.748 "name": "BaseBdev2", 00:10:12.748 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:12.748 "is_configured": true, 00:10:12.748 "data_offset": 0, 00:10:12.748 "data_size": 65536 00:10:12.748 }, 00:10:12.748 { 00:10:12.748 "name": "BaseBdev3", 00:10:12.748 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:12.748 "is_configured": true, 00:10:12.748 "data_offset": 0, 00:10:12.748 "data_size": 65536 00:10:12.748 } 00:10:12.748 ] 00:10:12.748 }' 00:10:12.748 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.748 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.026 [2024-11-29 07:42:02.862026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.026 "name": "Existed_Raid", 00:10:13.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.026 "strip_size_kb": 0, 00:10:13.026 "state": "configuring", 00:10:13.026 "raid_level": "raid1", 00:10:13.026 "superblock": false, 00:10:13.026 "num_base_bdevs": 3, 
00:10:13.026 "num_base_bdevs_discovered": 1, 00:10:13.026 "num_base_bdevs_operational": 3, 00:10:13.026 "base_bdevs_list": [ 00:10:13.026 { 00:10:13.026 "name": "BaseBdev1", 00:10:13.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.026 "is_configured": false, 00:10:13.026 "data_offset": 0, 00:10:13.026 "data_size": 0 00:10:13.026 }, 00:10:13.026 { 00:10:13.026 "name": null, 00:10:13.026 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:13.026 "is_configured": false, 00:10:13.026 "data_offset": 0, 00:10:13.026 "data_size": 65536 00:10:13.026 }, 00:10:13.026 { 00:10:13.026 "name": "BaseBdev3", 00:10:13.026 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:13.026 "is_configured": true, 00:10:13.026 "data_offset": 0, 00:10:13.026 "data_size": 65536 00:10:13.026 } 00:10:13.026 ] 00:10:13.026 }' 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.026 07:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.593 07:42:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [2024-11-29 07:42:03.365666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.593 BaseBdev1 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [ 00:10:13.593 { 00:10:13.593 "name": "BaseBdev1", 00:10:13.593 "aliases": [ 00:10:13.593 "4e16425e-6d26-4c91-b703-131fb460f783" 00:10:13.593 ], 00:10:13.593 "product_name": "Malloc disk", 
00:10:13.593 "block_size": 512, 00:10:13.593 "num_blocks": 65536, 00:10:13.593 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:13.593 "assigned_rate_limits": { 00:10:13.593 "rw_ios_per_sec": 0, 00:10:13.593 "rw_mbytes_per_sec": 0, 00:10:13.593 "r_mbytes_per_sec": 0, 00:10:13.593 "w_mbytes_per_sec": 0 00:10:13.593 }, 00:10:13.593 "claimed": true, 00:10:13.593 "claim_type": "exclusive_write", 00:10:13.593 "zoned": false, 00:10:13.593 "supported_io_types": { 00:10:13.593 "read": true, 00:10:13.593 "write": true, 00:10:13.593 "unmap": true, 00:10:13.593 "flush": true, 00:10:13.593 "reset": true, 00:10:13.593 "nvme_admin": false, 00:10:13.593 "nvme_io": false, 00:10:13.593 "nvme_io_md": false, 00:10:13.593 "write_zeroes": true, 00:10:13.593 "zcopy": true, 00:10:13.593 "get_zone_info": false, 00:10:13.593 "zone_management": false, 00:10:13.593 "zone_append": false, 00:10:13.593 "compare": false, 00:10:13.593 "compare_and_write": false, 00:10:13.593 "abort": true, 00:10:13.593 "seek_hole": false, 00:10:13.593 "seek_data": false, 00:10:13.593 "copy": true, 00:10:13.593 "nvme_iov_md": false 00:10:13.593 }, 00:10:13.593 "memory_domains": [ 00:10:13.593 { 00:10:13.593 "dma_device_id": "system", 00:10:13.593 "dma_device_type": 1 00:10:13.593 }, 00:10:13.593 { 00:10:13.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.593 "dma_device_type": 2 00:10:13.593 } 00:10:13.593 ], 00:10:13.593 "driver_specific": {} 00:10:13.593 } 00:10:13.593 ] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.593 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.594 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.594 "name": "Existed_Raid", 00:10:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.594 "strip_size_kb": 0, 00:10:13.594 "state": "configuring", 00:10:13.594 "raid_level": "raid1", 00:10:13.594 "superblock": false, 00:10:13.594 "num_base_bdevs": 3, 00:10:13.594 "num_base_bdevs_discovered": 2, 00:10:13.594 "num_base_bdevs_operational": 3, 00:10:13.594 "base_bdevs_list": [ 00:10:13.594 { 00:10:13.594 "name": "BaseBdev1", 00:10:13.594 "uuid": 
"4e16425e-6d26-4c91-b703-131fb460f783", 00:10:13.594 "is_configured": true, 00:10:13.594 "data_offset": 0, 00:10:13.594 "data_size": 65536 00:10:13.594 }, 00:10:13.594 { 00:10:13.594 "name": null, 00:10:13.594 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:13.594 "is_configured": false, 00:10:13.594 "data_offset": 0, 00:10:13.594 "data_size": 65536 00:10:13.594 }, 00:10:13.594 { 00:10:13.594 "name": "BaseBdev3", 00:10:13.594 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:13.594 "is_configured": true, 00:10:13.594 "data_offset": 0, 00:10:13.594 "data_size": 65536 00:10:13.594 } 00:10:13.594 ] 00:10:13.594 }' 00:10:13.594 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.594 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.851 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.851 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.851 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 [2024-11-29 07:42:03.836929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.110 07:42:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.110 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.111 "name": "Existed_Raid", 00:10:14.111 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:14.111 "strip_size_kb": 0, 00:10:14.111 "state": "configuring", 00:10:14.111 "raid_level": "raid1", 00:10:14.111 "superblock": false, 00:10:14.111 "num_base_bdevs": 3, 00:10:14.111 "num_base_bdevs_discovered": 1, 00:10:14.111 "num_base_bdevs_operational": 3, 00:10:14.111 "base_bdevs_list": [ 00:10:14.111 { 00:10:14.111 "name": "BaseBdev1", 00:10:14.111 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:14.111 "is_configured": true, 00:10:14.111 "data_offset": 0, 00:10:14.111 "data_size": 65536 00:10:14.111 }, 00:10:14.111 { 00:10:14.111 "name": null, 00:10:14.111 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:14.111 "is_configured": false, 00:10:14.111 "data_offset": 0, 00:10:14.111 "data_size": 65536 00:10:14.111 }, 00:10:14.111 { 00:10:14.111 "name": null, 00:10:14.111 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:14.111 "is_configured": false, 00:10:14.111 "data_offset": 0, 00:10:14.111 "data_size": 65536 00:10:14.111 } 00:10:14.111 ] 00:10:14.111 }' 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.111 07:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.370 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 [2024-11-29 07:42:04.312160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.630 "name": "Existed_Raid", 00:10:14.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.630 "strip_size_kb": 0, 00:10:14.630 "state": "configuring", 00:10:14.630 "raid_level": "raid1", 00:10:14.630 "superblock": false, 00:10:14.630 "num_base_bdevs": 3, 00:10:14.630 "num_base_bdevs_discovered": 2, 00:10:14.630 "num_base_bdevs_operational": 3, 00:10:14.630 "base_bdevs_list": [ 00:10:14.630 { 00:10:14.630 "name": "BaseBdev1", 00:10:14.630 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:14.630 "is_configured": true, 00:10:14.630 "data_offset": 0, 00:10:14.630 "data_size": 65536 00:10:14.630 }, 00:10:14.630 { 00:10:14.630 "name": null, 00:10:14.630 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:14.630 "is_configured": false, 00:10:14.630 "data_offset": 0, 00:10:14.630 "data_size": 65536 00:10:14.630 }, 00:10:14.630 { 00:10:14.630 "name": "BaseBdev3", 00:10:14.630 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:14.630 "is_configured": true, 00:10:14.630 "data_offset": 0, 00:10:14.630 "data_size": 65536 00:10:14.630 } 00:10:14.630 ] 00:10:14.630 }' 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.630 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.889 [2024-11-29 07:42:04.715445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.889 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.148 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.148 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.148 "name": "Existed_Raid", 00:10:15.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.148 "strip_size_kb": 0, 00:10:15.148 "state": "configuring", 00:10:15.148 "raid_level": "raid1", 00:10:15.148 "superblock": false, 00:10:15.148 "num_base_bdevs": 3, 00:10:15.148 "num_base_bdevs_discovered": 1, 00:10:15.148 "num_base_bdevs_operational": 3, 00:10:15.148 "base_bdevs_list": [ 00:10:15.148 { 00:10:15.148 "name": null, 00:10:15.148 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:15.148 "is_configured": false, 00:10:15.148 "data_offset": 0, 00:10:15.148 "data_size": 65536 00:10:15.148 }, 00:10:15.148 { 00:10:15.148 "name": null, 00:10:15.148 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:15.148 "is_configured": false, 00:10:15.148 "data_offset": 0, 00:10:15.148 "data_size": 65536 00:10:15.148 }, 00:10:15.148 { 00:10:15.148 "name": "BaseBdev3", 00:10:15.148 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:15.148 "is_configured": true, 00:10:15.148 "data_offset": 0, 00:10:15.148 "data_size": 65536 00:10:15.148 } 00:10:15.148 ] 00:10:15.148 }' 00:10:15.148 07:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.148 07:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.406 [2024-11-29 07:42:05.288655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.406 "name": "Existed_Raid", 00:10:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.406 "strip_size_kb": 0, 00:10:15.406 "state": "configuring", 00:10:15.406 "raid_level": "raid1", 00:10:15.406 "superblock": false, 00:10:15.406 "num_base_bdevs": 3, 00:10:15.406 "num_base_bdevs_discovered": 2, 00:10:15.406 "num_base_bdevs_operational": 3, 00:10:15.406 "base_bdevs_list": [ 00:10:15.406 { 00:10:15.406 "name": null, 00:10:15.406 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:15.406 "is_configured": false, 00:10:15.406 "data_offset": 0, 00:10:15.406 "data_size": 65536 00:10:15.406 }, 00:10:15.406 { 00:10:15.406 "name": "BaseBdev2", 00:10:15.406 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 0, 00:10:15.406 "data_size": 65536 00:10:15.406 }, 00:10:15.406 { 00:10:15.406 "name": "BaseBdev3", 
00:10:15.406 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 0, 00:10:15.406 "data_size": 65536 00:10:15.406 } 00:10:15.406 ] 00:10:15.406 }' 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.406 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4e16425e-6d26-4c91-b703-131fb460f783 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.974 [2024-11-29 07:42:05.859194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.974 [2024-11-29 07:42:05.859242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.974 [2024-11-29 07:42:05.859250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:15.974 [2024-11-29 07:42:05.859511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:15.974 [2024-11-29 07:42:05.859682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.974 [2024-11-29 07:42:05.859698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:15.974 [2024-11-29 07:42:05.859928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.974 NewBaseBdev 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.974 
07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 [ 00:10:15.974 { 00:10:15.974 "name": "NewBaseBdev", 00:10:15.974 "aliases": [ 00:10:15.974 "4e16425e-6d26-4c91-b703-131fb460f783" 00:10:15.974 ], 00:10:15.974 "product_name": "Malloc disk", 00:10:15.974 "block_size": 512, 00:10:15.974 "num_blocks": 65536, 00:10:15.974 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:15.974 "assigned_rate_limits": { 00:10:15.974 "rw_ios_per_sec": 0, 00:10:15.974 "rw_mbytes_per_sec": 0, 00:10:15.974 "r_mbytes_per_sec": 0, 00:10:15.974 "w_mbytes_per_sec": 0 00:10:15.974 }, 00:10:15.974 "claimed": true, 00:10:15.974 "claim_type": "exclusive_write", 00:10:15.974 "zoned": false, 00:10:15.974 "supported_io_types": { 00:10:15.974 "read": true, 00:10:15.974 "write": true, 00:10:15.974 "unmap": true, 00:10:15.974 "flush": true, 00:10:15.974 "reset": true, 00:10:15.974 "nvme_admin": false, 00:10:15.974 "nvme_io": false, 00:10:15.974 "nvme_io_md": false, 00:10:15.974 "write_zeroes": true, 00:10:15.974 "zcopy": true, 00:10:15.974 "get_zone_info": false, 00:10:15.974 "zone_management": false, 00:10:15.974 "zone_append": false, 00:10:15.974 "compare": false, 00:10:15.974 "compare_and_write": false, 00:10:15.974 "abort": true, 00:10:15.974 "seek_hole": false, 00:10:15.974 "seek_data": false, 00:10:15.974 "copy": true, 00:10:15.974 "nvme_iov_md": false 00:10:15.974 }, 00:10:15.974 "memory_domains": [ 00:10:15.974 { 00:10:15.974 "dma_device_id": "system", 00:10:15.974 "dma_device_type": 1 
00:10:15.974 }, 00:10:15.974 { 00:10:15.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.974 "dma_device_type": 2 00:10:15.974 } 00:10:15.974 ], 00:10:15.974 "driver_specific": {} 00:10:15.974 } 00:10:15.974 ] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:15.974 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.975 07:42:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.233 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.233 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.233 "name": "Existed_Raid", 00:10:16.233 "uuid": "76fd13ff-fdd6-4469-a682-09c311592465", 00:10:16.233 "strip_size_kb": 0, 00:10:16.233 "state": "online", 00:10:16.233 "raid_level": "raid1", 00:10:16.233 "superblock": false, 00:10:16.233 "num_base_bdevs": 3, 00:10:16.233 "num_base_bdevs_discovered": 3, 00:10:16.233 "num_base_bdevs_operational": 3, 00:10:16.233 "base_bdevs_list": [ 00:10:16.233 { 00:10:16.233 "name": "NewBaseBdev", 00:10:16.234 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:16.234 "is_configured": true, 00:10:16.234 "data_offset": 0, 00:10:16.234 "data_size": 65536 00:10:16.234 }, 00:10:16.234 { 00:10:16.234 "name": "BaseBdev2", 00:10:16.234 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:16.234 "is_configured": true, 00:10:16.234 "data_offset": 0, 00:10:16.234 "data_size": 65536 00:10:16.234 }, 00:10:16.234 { 00:10:16.234 "name": "BaseBdev3", 00:10:16.234 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:16.234 "is_configured": true, 00:10:16.234 "data_offset": 0, 00:10:16.234 "data_size": 65536 00:10:16.234 } 00:10:16.234 ] 00:10:16.234 }' 00:10:16.234 07:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.234 07:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.493 [2024-11-29 07:42:06.278831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.493 "name": "Existed_Raid", 00:10:16.493 "aliases": [ 00:10:16.493 "76fd13ff-fdd6-4469-a682-09c311592465" 00:10:16.493 ], 00:10:16.493 "product_name": "Raid Volume", 00:10:16.493 "block_size": 512, 00:10:16.493 "num_blocks": 65536, 00:10:16.493 "uuid": "76fd13ff-fdd6-4469-a682-09c311592465", 00:10:16.493 "assigned_rate_limits": { 00:10:16.493 "rw_ios_per_sec": 0, 00:10:16.493 "rw_mbytes_per_sec": 0, 00:10:16.493 "r_mbytes_per_sec": 0, 00:10:16.493 "w_mbytes_per_sec": 0 00:10:16.493 }, 00:10:16.493 "claimed": false, 00:10:16.493 "zoned": false, 00:10:16.493 "supported_io_types": { 00:10:16.493 "read": true, 00:10:16.493 "write": true, 00:10:16.493 "unmap": false, 00:10:16.493 "flush": false, 00:10:16.493 "reset": true, 00:10:16.493 "nvme_admin": false, 00:10:16.493 "nvme_io": false, 00:10:16.493 "nvme_io_md": false, 00:10:16.493 "write_zeroes": true, 00:10:16.493 "zcopy": false, 00:10:16.493 "get_zone_info": false, 00:10:16.493 "zone_management": false, 00:10:16.493 
"zone_append": false, 00:10:16.493 "compare": false, 00:10:16.493 "compare_and_write": false, 00:10:16.493 "abort": false, 00:10:16.493 "seek_hole": false, 00:10:16.493 "seek_data": false, 00:10:16.493 "copy": false, 00:10:16.493 "nvme_iov_md": false 00:10:16.493 }, 00:10:16.493 "memory_domains": [ 00:10:16.493 { 00:10:16.493 "dma_device_id": "system", 00:10:16.493 "dma_device_type": 1 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.493 "dma_device_type": 2 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "dma_device_id": "system", 00:10:16.493 "dma_device_type": 1 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.493 "dma_device_type": 2 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "dma_device_id": "system", 00:10:16.493 "dma_device_type": 1 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.493 "dma_device_type": 2 00:10:16.493 } 00:10:16.493 ], 00:10:16.493 "driver_specific": { 00:10:16.493 "raid": { 00:10:16.493 "uuid": "76fd13ff-fdd6-4469-a682-09c311592465", 00:10:16.493 "strip_size_kb": 0, 00:10:16.493 "state": "online", 00:10:16.493 "raid_level": "raid1", 00:10:16.493 "superblock": false, 00:10:16.493 "num_base_bdevs": 3, 00:10:16.493 "num_base_bdevs_discovered": 3, 00:10:16.493 "num_base_bdevs_operational": 3, 00:10:16.493 "base_bdevs_list": [ 00:10:16.493 { 00:10:16.493 "name": "NewBaseBdev", 00:10:16.493 "uuid": "4e16425e-6d26-4c91-b703-131fb460f783", 00:10:16.493 "is_configured": true, 00:10:16.493 "data_offset": 0, 00:10:16.493 "data_size": 65536 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "name": "BaseBdev2", 00:10:16.493 "uuid": "32d9295a-c006-4d95-b2b5-1629aab92b54", 00:10:16.493 "is_configured": true, 00:10:16.493 "data_offset": 0, 00:10:16.493 "data_size": 65536 00:10:16.493 }, 00:10:16.493 { 00:10:16.493 "name": "BaseBdev3", 00:10:16.493 "uuid": "da5717a2-83f0-4562-921e-9bfff466e42d", 00:10:16.493 "is_configured": true, 
00:10:16.493 "data_offset": 0, 00:10:16.493 "data_size": 65536 00:10:16.493 } 00:10:16.493 ] 00:10:16.493 } 00:10:16.493 } 00:10:16.493 }' 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.493 BaseBdev2 00:10:16.493 BaseBdev3' 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.493 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.494 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [2024-11-29 07:42:06.526134] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:10:16.753 [2024-11-29 07:42:06.526171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.753 [2024-11-29 07:42:06.526264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.753 [2024-11-29 07:42:06.526570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.753 [2024-11-29 07:42:06.526589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67191 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67191 ']' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67191 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67191 00:10:16.753 killing process with pid 67191 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67191' 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67191 00:10:16.753 07:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 
67191 00:10:16.753 [2024-11-29 07:42:06.572956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.012 [2024-11-29 07:42:06.862357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.391 ************************************ 00:10:18.391 END TEST raid_state_function_test 00:10:18.391 ************************************ 00:10:18.391 07:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:18.391 00:10:18.391 real 0m10.239s 00:10:18.391 user 0m16.259s 00:10:18.391 sys 0m1.744s 00:10:18.391 07:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.391 07:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.391 07:42:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:18.391 07:42:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.391 07:42:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.391 07:42:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.391 ************************************ 00:10:18.391 START TEST raid_state_function_test_sb 00:10:18.391 ************************************ 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:18.392 
07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 
00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67812 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67812' 00:10:18.392 Process raid pid: 67812 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67812 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67812 ']' 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.392 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.392 [2024-11-29 07:42:08.117491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:18.392 [2024-11-29 07:42:08.117606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.392 [2024-11-29 07:42:08.293424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.651 [2024-11-29 07:42:08.407108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.911 [2024-11-29 07:42:08.600580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.911 [2024-11-29 07:42:08.600619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 [2024-11-29 07:42:08.946650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.171 [2024-11-29 07:42:08.946705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.171 [2024-11-29 07:42:08.946721] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.171 [2024-11-29 07:42:08.946731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.171 [2024-11-29 07:42:08.946737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:19.171 [2024-11-29 07:42:08.946745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.171 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.171 "name": "Existed_Raid", 00:10:19.171 "uuid": "48bfe2f8-a809-4ae4-ad47-088e147bf510", 00:10:19.171 "strip_size_kb": 0, 00:10:19.171 "state": "configuring", 00:10:19.171 "raid_level": "raid1", 00:10:19.171 "superblock": true, 00:10:19.171 "num_base_bdevs": 3, 00:10:19.171 "num_base_bdevs_discovered": 0, 00:10:19.171 "num_base_bdevs_operational": 3, 00:10:19.171 "base_bdevs_list": [ 00:10:19.171 { 00:10:19.171 "name": "BaseBdev1", 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.171 "is_configured": false, 00:10:19.171 "data_offset": 0, 00:10:19.171 "data_size": 0 00:10:19.171 }, 00:10:19.171 { 00:10:19.171 "name": "BaseBdev2", 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.171 "is_configured": false, 00:10:19.171 "data_offset": 0, 00:10:19.171 "data_size": 0 00:10:19.171 }, 00:10:19.171 { 00:10:19.171 "name": "BaseBdev3", 00:10:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.171 "is_configured": false, 00:10:19.171 "data_offset": 0, 00:10:19.171 "data_size": 0 00:10:19.171 } 00:10:19.171 ] 00:10:19.172 }' 00:10:19.172 07:42:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.172 07:42:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.432 [2024-11-29 07:42:09.357905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.432 [2024-11-29 07:42:09.357944] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.432 [2024-11-29 07:42:09.365894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.432 [2024-11-29 07:42:09.365939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.432 [2024-11-29 07:42:09.365948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.432 [2024-11-29 07:42:09.365974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.432 [2024-11-29 07:42:09.365980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.432 [2024-11-29 07:42:09.365988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.432 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.691 [2024-11-29 07:42:09.408617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.691 BaseBdev1 
00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.691 [ 00:10:19.691 { 00:10:19.691 "name": "BaseBdev1", 00:10:19.691 "aliases": [ 00:10:19.691 "7ac4efa9-9fa2-4d2e-b983-7502acb11f39" 00:10:19.691 ], 00:10:19.691 "product_name": "Malloc disk", 00:10:19.691 "block_size": 512, 00:10:19.691 "num_blocks": 65536, 00:10:19.691 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:19.691 "assigned_rate_limits": { 00:10:19.691 
"rw_ios_per_sec": 0, 00:10:19.691 "rw_mbytes_per_sec": 0, 00:10:19.691 "r_mbytes_per_sec": 0, 00:10:19.691 "w_mbytes_per_sec": 0 00:10:19.691 }, 00:10:19.691 "claimed": true, 00:10:19.691 "claim_type": "exclusive_write", 00:10:19.691 "zoned": false, 00:10:19.691 "supported_io_types": { 00:10:19.691 "read": true, 00:10:19.691 "write": true, 00:10:19.691 "unmap": true, 00:10:19.691 "flush": true, 00:10:19.691 "reset": true, 00:10:19.691 "nvme_admin": false, 00:10:19.691 "nvme_io": false, 00:10:19.691 "nvme_io_md": false, 00:10:19.691 "write_zeroes": true, 00:10:19.691 "zcopy": true, 00:10:19.691 "get_zone_info": false, 00:10:19.691 "zone_management": false, 00:10:19.691 "zone_append": false, 00:10:19.691 "compare": false, 00:10:19.691 "compare_and_write": false, 00:10:19.691 "abort": true, 00:10:19.691 "seek_hole": false, 00:10:19.691 "seek_data": false, 00:10:19.691 "copy": true, 00:10:19.691 "nvme_iov_md": false 00:10:19.691 }, 00:10:19.691 "memory_domains": [ 00:10:19.691 { 00:10:19.691 "dma_device_id": "system", 00:10:19.691 "dma_device_type": 1 00:10:19.691 }, 00:10:19.691 { 00:10:19.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.691 "dma_device_type": 2 00:10:19.691 } 00:10:19.691 ], 00:10:19.691 "driver_specific": {} 00:10:19.691 } 00:10:19.691 ] 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.691 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.692 "name": "Existed_Raid", 00:10:19.692 "uuid": "e4730336-a3b7-4379-bd2f-a34e913e1660", 00:10:19.692 "strip_size_kb": 0, 00:10:19.692 "state": "configuring", 00:10:19.692 "raid_level": "raid1", 00:10:19.692 "superblock": true, 00:10:19.692 "num_base_bdevs": 3, 00:10:19.692 "num_base_bdevs_discovered": 1, 00:10:19.692 "num_base_bdevs_operational": 3, 00:10:19.692 "base_bdevs_list": [ 00:10:19.692 { 00:10:19.692 "name": "BaseBdev1", 00:10:19.692 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:19.692 "is_configured": true, 00:10:19.692 "data_offset": 2048, 00:10:19.692 "data_size": 63488 
00:10:19.692 }, 00:10:19.692 { 00:10:19.692 "name": "BaseBdev2", 00:10:19.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.692 "is_configured": false, 00:10:19.692 "data_offset": 0, 00:10:19.692 "data_size": 0 00:10:19.692 }, 00:10:19.692 { 00:10:19.692 "name": "BaseBdev3", 00:10:19.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.692 "is_configured": false, 00:10:19.692 "data_offset": 0, 00:10:19.692 "data_size": 0 00:10:19.692 } 00:10:19.692 ] 00:10:19.692 }' 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.692 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.951 [2024-11-29 07:42:09.879886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.951 [2024-11-29 07:42:09.879942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.951 [2024-11-29 07:42:09.887935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.951 [2024-11-29 07:42:09.889788] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.951 [2024-11-29 07:42:09.889832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.951 [2024-11-29 07:42:09.889842] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.951 [2024-11-29 07:42:09.889851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.951 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.210 "name": "Existed_Raid", 00:10:20.210 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:20.210 "strip_size_kb": 0, 00:10:20.210 "state": "configuring", 00:10:20.210 "raid_level": "raid1", 00:10:20.210 "superblock": true, 00:10:20.210 "num_base_bdevs": 3, 00:10:20.210 "num_base_bdevs_discovered": 1, 00:10:20.210 "num_base_bdevs_operational": 3, 00:10:20.210 "base_bdevs_list": [ 00:10:20.210 { 00:10:20.210 "name": "BaseBdev1", 00:10:20.210 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:20.210 "is_configured": true, 00:10:20.210 "data_offset": 2048, 00:10:20.210 "data_size": 63488 00:10:20.210 }, 00:10:20.210 { 00:10:20.210 "name": "BaseBdev2", 00:10:20.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.210 "is_configured": false, 00:10:20.210 "data_offset": 0, 00:10:20.210 "data_size": 0 00:10:20.210 }, 00:10:20.210 { 00:10:20.210 "name": "BaseBdev3", 00:10:20.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.210 "is_configured": false, 00:10:20.210 "data_offset": 0, 00:10:20.210 "data_size": 0 00:10:20.210 } 00:10:20.210 ] 00:10:20.210 }' 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.210 07:42:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.469 [2024-11-29 07:42:10.328776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.469 BaseBdev2 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:20.469 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.469 [ 00:10:20.469 { 00:10:20.469 "name": "BaseBdev2", 00:10:20.469 "aliases": [ 00:10:20.469 "feca7627-3a41-4d83-a4ac-4aec29585b90" 00:10:20.469 ], 00:10:20.469 "product_name": "Malloc disk", 00:10:20.469 "block_size": 512, 00:10:20.469 "num_blocks": 65536, 00:10:20.469 "uuid": "feca7627-3a41-4d83-a4ac-4aec29585b90", 00:10:20.469 "assigned_rate_limits": { 00:10:20.469 "rw_ios_per_sec": 0, 00:10:20.469 "rw_mbytes_per_sec": 0, 00:10:20.469 "r_mbytes_per_sec": 0, 00:10:20.469 "w_mbytes_per_sec": 0 00:10:20.469 }, 00:10:20.469 "claimed": true, 00:10:20.469 "claim_type": "exclusive_write", 00:10:20.469 "zoned": false, 00:10:20.469 "supported_io_types": { 00:10:20.470 "read": true, 00:10:20.470 "write": true, 00:10:20.470 "unmap": true, 00:10:20.470 "flush": true, 00:10:20.470 "reset": true, 00:10:20.470 "nvme_admin": false, 00:10:20.470 "nvme_io": false, 00:10:20.470 "nvme_io_md": false, 00:10:20.470 "write_zeroes": true, 00:10:20.470 "zcopy": true, 00:10:20.470 "get_zone_info": false, 00:10:20.470 "zone_management": false, 00:10:20.470 "zone_append": false, 00:10:20.470 "compare": false, 00:10:20.470 "compare_and_write": false, 00:10:20.470 "abort": true, 00:10:20.470 "seek_hole": false, 00:10:20.470 "seek_data": false, 00:10:20.470 "copy": true, 00:10:20.470 "nvme_iov_md": false 00:10:20.470 }, 00:10:20.470 "memory_domains": [ 00:10:20.470 { 00:10:20.470 "dma_device_id": "system", 00:10:20.470 "dma_device_type": 1 00:10:20.470 }, 00:10:20.470 { 00:10:20.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.470 "dma_device_type": 2 00:10:20.470 } 00:10:20.470 ], 00:10:20.470 "driver_specific": {} 00:10:20.470 } 00:10:20.470 ] 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.470 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.739 
07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.739 "name": "Existed_Raid", 00:10:20.739 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:20.739 "strip_size_kb": 0, 00:10:20.739 "state": "configuring", 00:10:20.739 "raid_level": "raid1", 00:10:20.739 "superblock": true, 00:10:20.739 "num_base_bdevs": 3, 00:10:20.739 "num_base_bdevs_discovered": 2, 00:10:20.739 "num_base_bdevs_operational": 3, 00:10:20.739 "base_bdevs_list": [ 00:10:20.739 { 00:10:20.739 "name": "BaseBdev1", 00:10:20.739 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:20.739 "is_configured": true, 00:10:20.739 "data_offset": 2048, 00:10:20.739 "data_size": 63488 00:10:20.739 }, 00:10:20.739 { 00:10:20.739 "name": "BaseBdev2", 00:10:20.739 "uuid": "feca7627-3a41-4d83-a4ac-4aec29585b90", 00:10:20.739 "is_configured": true, 00:10:20.739 "data_offset": 2048, 00:10:20.739 "data_size": 63488 00:10:20.739 }, 00:10:20.739 { 00:10:20.739 "name": "BaseBdev3", 00:10:20.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.739 "is_configured": false, 00:10:20.739 "data_offset": 0, 00:10:20.739 "data_size": 0 00:10:20.739 } 00:10:20.739 ] 00:10:20.739 }' 00:10:20.739 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.739 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 [2024-11-29 07:42:10.827198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.999 [2024-11-29 07:42:10.827616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:20.999 [2024-11-29 07:42:10.827646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.999 [2024-11-29 07:42:10.827951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:20.999 [2024-11-29 07:42:10.828160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.999 [2024-11-29 07:42:10.828180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:20.999 BaseBdev3 00:10:20.999 [2024-11-29 07:42:10.828356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.999 07:42:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 [ 00:10:20.999 { 00:10:20.999 "name": "BaseBdev3", 00:10:20.999 "aliases": [ 00:10:20.999 "d033d484-6e76-465b-a59f-09ea59d58dd8" 00:10:20.999 ], 00:10:20.999 "product_name": "Malloc disk", 00:10:20.999 "block_size": 512, 00:10:20.999 "num_blocks": 65536, 00:10:20.999 "uuid": "d033d484-6e76-465b-a59f-09ea59d58dd8", 00:10:20.999 "assigned_rate_limits": { 00:10:20.999 "rw_ios_per_sec": 0, 00:10:20.999 "rw_mbytes_per_sec": 0, 00:10:20.999 "r_mbytes_per_sec": 0, 00:10:20.999 "w_mbytes_per_sec": 0 00:10:20.999 }, 00:10:20.999 "claimed": true, 00:10:20.999 "claim_type": "exclusive_write", 00:10:20.999 "zoned": false, 00:10:20.999 "supported_io_types": { 00:10:20.999 "read": true, 00:10:20.999 "write": true, 00:10:20.999 "unmap": true, 00:10:20.999 "flush": true, 00:10:20.999 "reset": true, 00:10:20.999 "nvme_admin": false, 00:10:20.999 "nvme_io": false, 00:10:20.999 "nvme_io_md": false, 00:10:20.999 "write_zeroes": true, 00:10:20.999 "zcopy": true, 00:10:20.999 "get_zone_info": false, 00:10:20.999 "zone_management": false, 00:10:20.999 "zone_append": false, 00:10:20.999 "compare": false, 00:10:20.999 "compare_and_write": false, 00:10:20.999 "abort": true, 00:10:20.999 "seek_hole": false, 00:10:20.999 "seek_data": false, 00:10:20.999 "copy": true, 00:10:20.999 "nvme_iov_md": false 00:10:20.999 }, 00:10:20.999 "memory_domains": [ 00:10:20.999 { 00:10:20.999 "dma_device_id": "system", 00:10:20.999 "dma_device_type": 1 00:10:20.999 }, 00:10:20.999 { 00:10:20.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.999 "dma_device_type": 2 00:10:20.999 } 00:10:20.999 ], 00:10:20.999 "driver_specific": {} 00:10:20.999 } 00:10:20.999 ] 
00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.999 07:42:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.999 "name": "Existed_Raid", 00:10:20.999 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:20.999 "strip_size_kb": 0, 00:10:20.999 "state": "online", 00:10:20.999 "raid_level": "raid1", 00:10:20.999 "superblock": true, 00:10:20.999 "num_base_bdevs": 3, 00:10:20.999 "num_base_bdevs_discovered": 3, 00:10:20.999 "num_base_bdevs_operational": 3, 00:10:20.999 "base_bdevs_list": [ 00:10:20.999 { 00:10:20.999 "name": "BaseBdev1", 00:10:20.999 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:20.999 "is_configured": true, 00:10:20.999 "data_offset": 2048, 00:10:20.999 "data_size": 63488 00:10:20.999 }, 00:10:20.999 { 00:10:20.999 "name": "BaseBdev2", 00:10:20.999 "uuid": "feca7627-3a41-4d83-a4ac-4aec29585b90", 00:10:20.999 "is_configured": true, 00:10:20.999 "data_offset": 2048, 00:10:20.999 "data_size": 63488 00:10:20.999 }, 00:10:20.999 { 00:10:20.999 "name": "BaseBdev3", 00:10:20.999 "uuid": "d033d484-6e76-465b-a59f-09ea59d58dd8", 00:10:20.999 "is_configured": true, 00:10:20.999 "data_offset": 2048, 00:10:20.999 "data_size": 63488 00:10:20.999 } 00:10:20.999 ] 00:10:20.999 }' 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.999 07:42:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.600 [2024-11-29 07:42:11.290732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.600 "name": "Existed_Raid", 00:10:21.600 "aliases": [ 00:10:21.600 "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de" 00:10:21.600 ], 00:10:21.600 "product_name": "Raid Volume", 00:10:21.600 "block_size": 512, 00:10:21.600 "num_blocks": 63488, 00:10:21.600 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:21.600 "assigned_rate_limits": { 00:10:21.600 "rw_ios_per_sec": 0, 00:10:21.600 "rw_mbytes_per_sec": 0, 00:10:21.600 "r_mbytes_per_sec": 0, 00:10:21.600 "w_mbytes_per_sec": 0 00:10:21.600 }, 00:10:21.600 "claimed": false, 00:10:21.600 "zoned": false, 00:10:21.600 "supported_io_types": { 00:10:21.600 "read": true, 00:10:21.600 "write": true, 00:10:21.600 "unmap": false, 00:10:21.600 "flush": false, 00:10:21.600 "reset": true, 00:10:21.600 "nvme_admin": false, 00:10:21.600 "nvme_io": false, 00:10:21.600 "nvme_io_md": false, 00:10:21.600 
"write_zeroes": true, 00:10:21.600 "zcopy": false, 00:10:21.600 "get_zone_info": false, 00:10:21.600 "zone_management": false, 00:10:21.600 "zone_append": false, 00:10:21.600 "compare": false, 00:10:21.600 "compare_and_write": false, 00:10:21.600 "abort": false, 00:10:21.600 "seek_hole": false, 00:10:21.600 "seek_data": false, 00:10:21.600 "copy": false, 00:10:21.600 "nvme_iov_md": false 00:10:21.600 }, 00:10:21.600 "memory_domains": [ 00:10:21.600 { 00:10:21.600 "dma_device_id": "system", 00:10:21.600 "dma_device_type": 1 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.600 "dma_device_type": 2 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "dma_device_id": "system", 00:10:21.600 "dma_device_type": 1 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.600 "dma_device_type": 2 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "dma_device_id": "system", 00:10:21.600 "dma_device_type": 1 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.600 "dma_device_type": 2 00:10:21.600 } 00:10:21.600 ], 00:10:21.600 "driver_specific": { 00:10:21.600 "raid": { 00:10:21.600 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:21.600 "strip_size_kb": 0, 00:10:21.600 "state": "online", 00:10:21.600 "raid_level": "raid1", 00:10:21.600 "superblock": true, 00:10:21.600 "num_base_bdevs": 3, 00:10:21.600 "num_base_bdevs_discovered": 3, 00:10:21.600 "num_base_bdevs_operational": 3, 00:10:21.600 "base_bdevs_list": [ 00:10:21.600 { 00:10:21.600 "name": "BaseBdev1", 00:10:21.600 "uuid": "7ac4efa9-9fa2-4d2e-b983-7502acb11f39", 00:10:21.600 "is_configured": true, 00:10:21.600 "data_offset": 2048, 00:10:21.600 "data_size": 63488 00:10:21.600 }, 00:10:21.600 { 00:10:21.600 "name": "BaseBdev2", 00:10:21.600 "uuid": "feca7627-3a41-4d83-a4ac-4aec29585b90", 00:10:21.600 "is_configured": true, 00:10:21.600 "data_offset": 2048, 00:10:21.600 "data_size": 63488 00:10:21.600 }, 
00:10:21.600 { 00:10:21.600 "name": "BaseBdev3", 00:10:21.600 "uuid": "d033d484-6e76-465b-a59f-09ea59d58dd8", 00:10:21.600 "is_configured": true, 00:10:21.600 "data_offset": 2048, 00:10:21.600 "data_size": 63488 00:10:21.600 } 00:10:21.600 ] 00:10:21.600 } 00:10:21.600 } 00:10:21.600 }' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:21.600 BaseBdev2 00:10:21.600 BaseBdev3' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.600 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.601 
07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.601 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.860 [2024-11-29 07:42:11.550017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.860 
07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.860 "name": "Existed_Raid", 00:10:21.860 "uuid": "ca8bfa4b-dcaa-4d10-8a9c-14389af7e2de", 00:10:21.860 "strip_size_kb": 0, 00:10:21.860 "state": "online", 00:10:21.860 "raid_level": "raid1", 00:10:21.860 "superblock": true, 00:10:21.860 "num_base_bdevs": 3, 00:10:21.860 "num_base_bdevs_discovered": 2, 00:10:21.860 "num_base_bdevs_operational": 2, 00:10:21.860 "base_bdevs_list": [ 00:10:21.860 { 00:10:21.860 "name": null, 00:10:21.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.860 "is_configured": false, 00:10:21.860 "data_offset": 0, 00:10:21.860 "data_size": 63488 00:10:21.860 }, 00:10:21.860 { 00:10:21.860 "name": "BaseBdev2", 00:10:21.860 "uuid": "feca7627-3a41-4d83-a4ac-4aec29585b90", 00:10:21.860 "is_configured": true, 00:10:21.860 "data_offset": 2048, 00:10:21.860 "data_size": 63488 00:10:21.860 }, 00:10:21.860 { 00:10:21.860 "name": "BaseBdev3", 00:10:21.860 "uuid": "d033d484-6e76-465b-a59f-09ea59d58dd8", 00:10:21.860 "is_configured": true, 00:10:21.860 "data_offset": 2048, 00:10:21.860 "data_size": 63488 00:10:21.860 } 00:10:21.860 ] 00:10:21.860 }' 00:10:21.860 07:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.860 
07:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.120 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.380 [2024-11-29 07:42:12.108678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.380 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.380 [2024-11-29 07:42:12.260255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.380 [2024-11-29 07:42:12.260362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.640 [2024-11-29 07:42:12.352334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.640 [2024-11-29 07:42:12.352397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.640 [2024-11-29 07:42:12.352409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.640 BaseBdev2 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.640 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 [ 00:10:22.641 { 00:10:22.641 "name": "BaseBdev2", 00:10:22.641 "aliases": [ 00:10:22.641 "f032afeb-4dfe-4374-8fb3-405fb7f3512a" 00:10:22.641 ], 00:10:22.641 "product_name": "Malloc disk", 00:10:22.641 "block_size": 512, 00:10:22.641 "num_blocks": 65536, 00:10:22.641 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:22.641 "assigned_rate_limits": { 00:10:22.641 "rw_ios_per_sec": 0, 00:10:22.641 "rw_mbytes_per_sec": 0, 00:10:22.641 "r_mbytes_per_sec": 0, 00:10:22.641 "w_mbytes_per_sec": 0 00:10:22.641 }, 00:10:22.641 "claimed": false, 00:10:22.641 "zoned": false, 00:10:22.641 "supported_io_types": { 00:10:22.641 "read": true, 00:10:22.641 "write": true, 00:10:22.641 "unmap": true, 00:10:22.641 "flush": true, 00:10:22.641 "reset": true, 00:10:22.641 "nvme_admin": false, 00:10:22.641 "nvme_io": false, 00:10:22.641 
"nvme_io_md": false, 00:10:22.641 "write_zeroes": true, 00:10:22.641 "zcopy": true, 00:10:22.641 "get_zone_info": false, 00:10:22.641 "zone_management": false, 00:10:22.641 "zone_append": false, 00:10:22.641 "compare": false, 00:10:22.641 "compare_and_write": false, 00:10:22.641 "abort": true, 00:10:22.641 "seek_hole": false, 00:10:22.641 "seek_data": false, 00:10:22.641 "copy": true, 00:10:22.641 "nvme_iov_md": false 00:10:22.641 }, 00:10:22.641 "memory_domains": [ 00:10:22.641 { 00:10:22.641 "dma_device_id": "system", 00:10:22.641 "dma_device_type": 1 00:10:22.641 }, 00:10:22.641 { 00:10:22.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.641 "dma_device_type": 2 00:10:22.641 } 00:10:22.641 ], 00:10:22.641 "driver_specific": {} 00:10:22.641 } 00:10:22.641 ] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 BaseBdev3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 [ 00:10:22.641 { 00:10:22.641 "name": "BaseBdev3", 00:10:22.641 "aliases": [ 00:10:22.641 "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d" 00:10:22.641 ], 00:10:22.641 "product_name": "Malloc disk", 00:10:22.641 "block_size": 512, 00:10:22.641 "num_blocks": 65536, 00:10:22.641 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:22.641 "assigned_rate_limits": { 00:10:22.641 "rw_ios_per_sec": 0, 00:10:22.641 "rw_mbytes_per_sec": 0, 00:10:22.641 "r_mbytes_per_sec": 0, 00:10:22.641 "w_mbytes_per_sec": 0 00:10:22.641 }, 00:10:22.641 "claimed": false, 00:10:22.641 "zoned": false, 00:10:22.641 "supported_io_types": { 00:10:22.641 "read": true, 00:10:22.641 "write": true, 00:10:22.641 "unmap": true, 00:10:22.641 "flush": true, 00:10:22.641 "reset": true, 00:10:22.641 "nvme_admin": false, 
00:10:22.641 "nvme_io": false, 00:10:22.641 "nvme_io_md": false, 00:10:22.641 "write_zeroes": true, 00:10:22.641 "zcopy": true, 00:10:22.641 "get_zone_info": false, 00:10:22.641 "zone_management": false, 00:10:22.641 "zone_append": false, 00:10:22.641 "compare": false, 00:10:22.641 "compare_and_write": false, 00:10:22.641 "abort": true, 00:10:22.641 "seek_hole": false, 00:10:22.641 "seek_data": false, 00:10:22.641 "copy": true, 00:10:22.641 "nvme_iov_md": false 00:10:22.641 }, 00:10:22.641 "memory_domains": [ 00:10:22.641 { 00:10:22.641 "dma_device_id": "system", 00:10:22.641 "dma_device_type": 1 00:10:22.641 }, 00:10:22.641 { 00:10:22.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.641 "dma_device_type": 2 00:10:22.641 } 00:10:22.641 ], 00:10:22.641 "driver_specific": {} 00:10:22.641 } 00:10:22.641 ] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.641 [2024-11-29 07:42:12.559195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.641 [2024-11-29 07:42:12.559238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.641 [2024-11-29 07:42:12.559257] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.641 [2024-11-29 07:42:12.561069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.641 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.901 
07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.901 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.901 "name": "Existed_Raid", 00:10:22.901 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:22.901 "strip_size_kb": 0, 00:10:22.901 "state": "configuring", 00:10:22.901 "raid_level": "raid1", 00:10:22.901 "superblock": true, 00:10:22.901 "num_base_bdevs": 3, 00:10:22.901 "num_base_bdevs_discovered": 2, 00:10:22.901 "num_base_bdevs_operational": 3, 00:10:22.901 "base_bdevs_list": [ 00:10:22.901 { 00:10:22.901 "name": "BaseBdev1", 00:10:22.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.901 "is_configured": false, 00:10:22.901 "data_offset": 0, 00:10:22.901 "data_size": 0 00:10:22.901 }, 00:10:22.901 { 00:10:22.901 "name": "BaseBdev2", 00:10:22.901 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:22.901 "is_configured": true, 00:10:22.901 "data_offset": 2048, 00:10:22.901 "data_size": 63488 00:10:22.901 }, 00:10:22.901 { 00:10:22.901 "name": "BaseBdev3", 00:10:22.901 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:22.901 "is_configured": true, 00:10:22.901 "data_offset": 2048, 00:10:22.901 "data_size": 63488 00:10:22.901 } 00:10:22.901 ] 00:10:22.901 }' 00:10:22.901 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.901 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.161 [2024-11-29 07:42:12.962524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.161 07:42:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.161 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.162 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.162 07:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.162 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.162 "name": 
"Existed_Raid", 00:10:23.162 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:23.162 "strip_size_kb": 0, 00:10:23.162 "state": "configuring", 00:10:23.162 "raid_level": "raid1", 00:10:23.162 "superblock": true, 00:10:23.162 "num_base_bdevs": 3, 00:10:23.162 "num_base_bdevs_discovered": 1, 00:10:23.162 "num_base_bdevs_operational": 3, 00:10:23.162 "base_bdevs_list": [ 00:10:23.162 { 00:10:23.162 "name": "BaseBdev1", 00:10:23.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.162 "is_configured": false, 00:10:23.162 "data_offset": 0, 00:10:23.162 "data_size": 0 00:10:23.162 }, 00:10:23.162 { 00:10:23.162 "name": null, 00:10:23.162 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:23.162 "is_configured": false, 00:10:23.162 "data_offset": 0, 00:10:23.162 "data_size": 63488 00:10:23.162 }, 00:10:23.162 { 00:10:23.162 "name": "BaseBdev3", 00:10:23.162 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:23.162 "is_configured": true, 00:10:23.162 "data_offset": 2048, 00:10:23.162 "data_size": 63488 00:10:23.162 } 00:10:23.162 ] 00:10:23.162 }' 00:10:23.162 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.162 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.421 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.421 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.421 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.421 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.679 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:23.680 
07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.680 [2024-11-29 07:42:13.434461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.680 BaseBdev1 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.680 [ 00:10:23.680 { 00:10:23.680 "name": "BaseBdev1", 00:10:23.680 "aliases": [ 00:10:23.680 "2456651a-a0ec-4e77-a185-f30a709c9ace" 00:10:23.680 ], 00:10:23.680 "product_name": "Malloc disk", 00:10:23.680 "block_size": 512, 00:10:23.680 "num_blocks": 65536, 00:10:23.680 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:23.680 "assigned_rate_limits": { 00:10:23.680 "rw_ios_per_sec": 0, 00:10:23.680 "rw_mbytes_per_sec": 0, 00:10:23.680 "r_mbytes_per_sec": 0, 00:10:23.680 "w_mbytes_per_sec": 0 00:10:23.680 }, 00:10:23.680 "claimed": true, 00:10:23.680 "claim_type": "exclusive_write", 00:10:23.680 "zoned": false, 00:10:23.680 "supported_io_types": { 00:10:23.680 "read": true, 00:10:23.680 "write": true, 00:10:23.680 "unmap": true, 00:10:23.680 "flush": true, 00:10:23.680 "reset": true, 00:10:23.680 "nvme_admin": false, 00:10:23.680 "nvme_io": false, 00:10:23.680 "nvme_io_md": false, 00:10:23.680 "write_zeroes": true, 00:10:23.680 "zcopy": true, 00:10:23.680 "get_zone_info": false, 00:10:23.680 "zone_management": false, 00:10:23.680 "zone_append": false, 00:10:23.680 "compare": false, 00:10:23.680 "compare_and_write": false, 00:10:23.680 "abort": true, 00:10:23.680 "seek_hole": false, 00:10:23.680 "seek_data": false, 00:10:23.680 "copy": true, 00:10:23.680 "nvme_iov_md": false 00:10:23.680 }, 00:10:23.680 "memory_domains": [ 00:10:23.680 { 00:10:23.680 "dma_device_id": "system", 00:10:23.680 "dma_device_type": 1 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.680 "dma_device_type": 2 00:10:23.680 } 00:10:23.680 ], 00:10:23.680 "driver_specific": {} 00:10:23.680 } 00:10:23.680 ] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.680 
07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.680 "name": "Existed_Raid", 00:10:23.680 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:23.680 "strip_size_kb": 0, 
00:10:23.680 "state": "configuring", 00:10:23.680 "raid_level": "raid1", 00:10:23.680 "superblock": true, 00:10:23.680 "num_base_bdevs": 3, 00:10:23.680 "num_base_bdevs_discovered": 2, 00:10:23.680 "num_base_bdevs_operational": 3, 00:10:23.680 "base_bdevs_list": [ 00:10:23.680 { 00:10:23.680 "name": "BaseBdev1", 00:10:23.680 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:23.680 "is_configured": true, 00:10:23.680 "data_offset": 2048, 00:10:23.680 "data_size": 63488 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "name": null, 00:10:23.680 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:23.680 "is_configured": false, 00:10:23.680 "data_offset": 0, 00:10:23.680 "data_size": 63488 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "name": "BaseBdev3", 00:10:23.680 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:23.680 "is_configured": true, 00:10:23.680 "data_offset": 2048, 00:10:23.680 "data_size": 63488 00:10:23.680 } 00:10:23.680 ] 00:10:23.680 }' 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.680 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.247 [2024-11-29 07:42:13.961590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.247 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.248 "name": "Existed_Raid", 00:10:24.248 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:24.248 "strip_size_kb": 0, 00:10:24.248 "state": "configuring", 00:10:24.248 "raid_level": "raid1", 00:10:24.248 "superblock": true, 00:10:24.248 "num_base_bdevs": 3, 00:10:24.248 "num_base_bdevs_discovered": 1, 00:10:24.248 "num_base_bdevs_operational": 3, 00:10:24.248 "base_bdevs_list": [ 00:10:24.248 { 00:10:24.248 "name": "BaseBdev1", 00:10:24.248 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:24.248 "is_configured": true, 00:10:24.248 "data_offset": 2048, 00:10:24.248 "data_size": 63488 00:10:24.248 }, 00:10:24.248 { 00:10:24.248 "name": null, 00:10:24.248 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:24.248 "is_configured": false, 00:10:24.248 "data_offset": 0, 00:10:24.248 "data_size": 63488 00:10:24.248 }, 00:10:24.248 { 00:10:24.248 "name": null, 00:10:24.248 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:24.248 "is_configured": false, 00:10:24.248 "data_offset": 0, 00:10:24.248 "data_size": 63488 00:10:24.248 } 00:10:24.248 ] 00:10:24.248 }' 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.248 07:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.507 [2024-11-29 07:42:14.432829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.507 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.766 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.766 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.766 "name": "Existed_Raid", 00:10:24.766 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:24.766 "strip_size_kb": 0, 00:10:24.766 "state": "configuring", 00:10:24.766 "raid_level": "raid1", 00:10:24.766 "superblock": true, 00:10:24.766 "num_base_bdevs": 3, 00:10:24.766 "num_base_bdevs_discovered": 2, 00:10:24.766 "num_base_bdevs_operational": 3, 00:10:24.766 "base_bdevs_list": [ 00:10:24.766 { 00:10:24.766 "name": "BaseBdev1", 00:10:24.766 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:24.766 "is_configured": true, 00:10:24.766 "data_offset": 2048, 00:10:24.766 "data_size": 63488 00:10:24.766 }, 00:10:24.766 { 00:10:24.766 "name": null, 00:10:24.766 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:24.766 "is_configured": false, 00:10:24.766 "data_offset": 0, 00:10:24.766 "data_size": 63488 00:10:24.766 }, 00:10:24.766 { 00:10:24.766 "name": "BaseBdev3", 00:10:24.766 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:24.766 "is_configured": true, 00:10:24.766 "data_offset": 2048, 00:10:24.766 "data_size": 63488 00:10:24.766 } 00:10:24.766 ] 00:10:24.766 }' 00:10:24.766 07:42:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.766 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.026 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.026 [2024-11-29 07:42:14.896023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.290 07:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.290 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.290 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.290 "name": "Existed_Raid", 00:10:25.290 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:25.290 "strip_size_kb": 0, 00:10:25.290 "state": "configuring", 00:10:25.290 "raid_level": "raid1", 00:10:25.290 "superblock": true, 00:10:25.290 "num_base_bdevs": 3, 00:10:25.290 "num_base_bdevs_discovered": 1, 00:10:25.290 "num_base_bdevs_operational": 3, 00:10:25.290 "base_bdevs_list": [ 00:10:25.290 { 00:10:25.290 "name": null, 00:10:25.290 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:25.290 "is_configured": false, 00:10:25.290 "data_offset": 0, 00:10:25.290 "data_size": 63488 00:10:25.290 }, 00:10:25.290 { 00:10:25.290 "name": null, 00:10:25.290 "uuid": 
"f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:25.290 "is_configured": false, 00:10:25.290 "data_offset": 0, 00:10:25.290 "data_size": 63488 00:10:25.290 }, 00:10:25.290 { 00:10:25.290 "name": "BaseBdev3", 00:10:25.290 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:25.290 "is_configured": true, 00:10:25.290 "data_offset": 2048, 00:10:25.290 "data_size": 63488 00:10:25.290 } 00:10:25.290 ] 00:10:25.290 }' 00:10:25.290 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.290 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.550 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.551 [2024-11-29 07:42:15.391539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.551 "name": "Existed_Raid", 00:10:25.551 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:25.551 "strip_size_kb": 0, 00:10:25.551 "state": "configuring", 00:10:25.551 
"raid_level": "raid1", 00:10:25.551 "superblock": true, 00:10:25.551 "num_base_bdevs": 3, 00:10:25.551 "num_base_bdevs_discovered": 2, 00:10:25.551 "num_base_bdevs_operational": 3, 00:10:25.551 "base_bdevs_list": [ 00:10:25.551 { 00:10:25.551 "name": null, 00:10:25.551 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:25.551 "is_configured": false, 00:10:25.551 "data_offset": 0, 00:10:25.551 "data_size": 63488 00:10:25.551 }, 00:10:25.551 { 00:10:25.551 "name": "BaseBdev2", 00:10:25.551 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:25.551 "is_configured": true, 00:10:25.551 "data_offset": 2048, 00:10:25.551 "data_size": 63488 00:10:25.551 }, 00:10:25.551 { 00:10:25.551 "name": "BaseBdev3", 00:10:25.551 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:25.551 "is_configured": true, 00:10:25.551 "data_offset": 2048, 00:10:25.551 "data_size": 63488 00:10:25.551 } 00:10:25.551 ] 00:10:25.551 }' 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.551 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.121 07:42:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2456651a-a0ec-4e77-a185-f30a709c9ace 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 [2024-11-29 07:42:15.910197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:26.121 [2024-11-29 07:42:15.910404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.121 [2024-11-29 07:42:15.910416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.121 [2024-11-29 07:42:15.910661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:26.121 NewBaseBdev 00:10:26.121 [2024-11-29 07:42:15.910824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.121 [2024-11-29 07:42:15.910842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:26.121 [2024-11-29 07:42:15.910971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:26.121 
07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 [ 00:10:26.121 { 00:10:26.121 "name": "NewBaseBdev", 00:10:26.121 "aliases": [ 00:10:26.121 "2456651a-a0ec-4e77-a185-f30a709c9ace" 00:10:26.121 ], 00:10:26.121 "product_name": "Malloc disk", 00:10:26.121 "block_size": 512, 00:10:26.121 "num_blocks": 65536, 00:10:26.121 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:26.121 "assigned_rate_limits": { 00:10:26.121 "rw_ios_per_sec": 0, 00:10:26.121 "rw_mbytes_per_sec": 0, 00:10:26.121 "r_mbytes_per_sec": 0, 00:10:26.121 "w_mbytes_per_sec": 0 00:10:26.121 }, 00:10:26.121 "claimed": true, 00:10:26.121 "claim_type": "exclusive_write", 00:10:26.121 
"zoned": false, 00:10:26.121 "supported_io_types": { 00:10:26.121 "read": true, 00:10:26.121 "write": true, 00:10:26.121 "unmap": true, 00:10:26.121 "flush": true, 00:10:26.121 "reset": true, 00:10:26.121 "nvme_admin": false, 00:10:26.121 "nvme_io": false, 00:10:26.121 "nvme_io_md": false, 00:10:26.121 "write_zeroes": true, 00:10:26.121 "zcopy": true, 00:10:26.121 "get_zone_info": false, 00:10:26.121 "zone_management": false, 00:10:26.121 "zone_append": false, 00:10:26.121 "compare": false, 00:10:26.121 "compare_and_write": false, 00:10:26.121 "abort": true, 00:10:26.121 "seek_hole": false, 00:10:26.121 "seek_data": false, 00:10:26.121 "copy": true, 00:10:26.121 "nvme_iov_md": false 00:10:26.121 }, 00:10:26.121 "memory_domains": [ 00:10:26.121 { 00:10:26.121 "dma_device_id": "system", 00:10:26.121 "dma_device_type": 1 00:10:26.121 }, 00:10:26.121 { 00:10:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.121 "dma_device_type": 2 00:10:26.121 } 00:10:26.121 ], 00:10:26.121 "driver_specific": {} 00:10:26.121 } 00:10:26.121 ] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.121 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.122 "name": "Existed_Raid", 00:10:26.122 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:26.122 "strip_size_kb": 0, 00:10:26.122 "state": "online", 00:10:26.122 "raid_level": "raid1", 00:10:26.122 "superblock": true, 00:10:26.122 "num_base_bdevs": 3, 00:10:26.122 "num_base_bdevs_discovered": 3, 00:10:26.122 "num_base_bdevs_operational": 3, 00:10:26.122 "base_bdevs_list": [ 00:10:26.122 { 00:10:26.122 "name": "NewBaseBdev", 00:10:26.122 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:26.122 "is_configured": true, 00:10:26.122 "data_offset": 2048, 00:10:26.122 "data_size": 63488 00:10:26.122 }, 00:10:26.122 { 00:10:26.122 "name": "BaseBdev2", 00:10:26.122 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:26.122 "is_configured": true, 00:10:26.122 "data_offset": 2048, 00:10:26.122 "data_size": 63488 00:10:26.122 }, 00:10:26.122 
{ 00:10:26.122 "name": "BaseBdev3", 00:10:26.122 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:26.122 "is_configured": true, 00:10:26.122 "data_offset": 2048, 00:10:26.122 "data_size": 63488 00:10:26.122 } 00:10:26.122 ] 00:10:26.122 }' 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.122 07:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.691 [2024-11-29 07:42:16.397676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.691 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.691 "name": "Existed_Raid", 00:10:26.691 
"aliases": [ 00:10:26.691 "b025f2c0-958b-4756-962e-049c85714e46" 00:10:26.691 ], 00:10:26.691 "product_name": "Raid Volume", 00:10:26.691 "block_size": 512, 00:10:26.691 "num_blocks": 63488, 00:10:26.691 "uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:26.691 "assigned_rate_limits": { 00:10:26.691 "rw_ios_per_sec": 0, 00:10:26.691 "rw_mbytes_per_sec": 0, 00:10:26.691 "r_mbytes_per_sec": 0, 00:10:26.691 "w_mbytes_per_sec": 0 00:10:26.691 }, 00:10:26.691 "claimed": false, 00:10:26.691 "zoned": false, 00:10:26.691 "supported_io_types": { 00:10:26.691 "read": true, 00:10:26.691 "write": true, 00:10:26.691 "unmap": false, 00:10:26.691 "flush": false, 00:10:26.691 "reset": true, 00:10:26.691 "nvme_admin": false, 00:10:26.691 "nvme_io": false, 00:10:26.691 "nvme_io_md": false, 00:10:26.691 "write_zeroes": true, 00:10:26.691 "zcopy": false, 00:10:26.691 "get_zone_info": false, 00:10:26.691 "zone_management": false, 00:10:26.691 "zone_append": false, 00:10:26.691 "compare": false, 00:10:26.691 "compare_and_write": false, 00:10:26.691 "abort": false, 00:10:26.691 "seek_hole": false, 00:10:26.691 "seek_data": false, 00:10:26.691 "copy": false, 00:10:26.691 "nvme_iov_md": false 00:10:26.691 }, 00:10:26.691 "memory_domains": [ 00:10:26.691 { 00:10:26.691 "dma_device_id": "system", 00:10:26.691 "dma_device_type": 1 00:10:26.691 }, 00:10:26.691 { 00:10:26.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.691 "dma_device_type": 2 00:10:26.691 }, 00:10:26.691 { 00:10:26.691 "dma_device_id": "system", 00:10:26.691 "dma_device_type": 1 00:10:26.691 }, 00:10:26.691 { 00:10:26.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.691 "dma_device_type": 2 00:10:26.691 }, 00:10:26.691 { 00:10:26.691 "dma_device_id": "system", 00:10:26.691 "dma_device_type": 1 00:10:26.691 }, 00:10:26.691 { 00:10:26.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.691 "dma_device_type": 2 00:10:26.691 } 00:10:26.691 ], 00:10:26.691 "driver_specific": { 00:10:26.691 "raid": { 00:10:26.691 
"uuid": "b025f2c0-958b-4756-962e-049c85714e46", 00:10:26.692 "strip_size_kb": 0, 00:10:26.692 "state": "online", 00:10:26.692 "raid_level": "raid1", 00:10:26.692 "superblock": true, 00:10:26.692 "num_base_bdevs": 3, 00:10:26.692 "num_base_bdevs_discovered": 3, 00:10:26.692 "num_base_bdevs_operational": 3, 00:10:26.692 "base_bdevs_list": [ 00:10:26.692 { 00:10:26.692 "name": "NewBaseBdev", 00:10:26.692 "uuid": "2456651a-a0ec-4e77-a185-f30a709c9ace", 00:10:26.692 "is_configured": true, 00:10:26.692 "data_offset": 2048, 00:10:26.692 "data_size": 63488 00:10:26.692 }, 00:10:26.692 { 00:10:26.692 "name": "BaseBdev2", 00:10:26.692 "uuid": "f032afeb-4dfe-4374-8fb3-405fb7f3512a", 00:10:26.692 "is_configured": true, 00:10:26.692 "data_offset": 2048, 00:10:26.692 "data_size": 63488 00:10:26.692 }, 00:10:26.692 { 00:10:26.692 "name": "BaseBdev3", 00:10:26.692 "uuid": "3388f2ba-d94c-42c4-94d4-fa13c2bdc44d", 00:10:26.692 "is_configured": true, 00:10:26.692 "data_offset": 2048, 00:10:26.692 "data_size": 63488 00:10:26.692 } 00:10:26.692 ] 00:10:26.692 } 00:10:26.692 } 00:10:26.692 }' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:26.692 BaseBdev2 00:10:26.692 BaseBdev3' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:26.692 07:42:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.692 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.951 [2024-11-29 07:42:16.680915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.951 [2024-11-29 07:42:16.680954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.951 [2024-11-29 07:42:16.681031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.951 [2024-11-29 07:42:16.681328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.951 [2024-11-29 07:42:16.681347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67812 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67812 ']' 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67812 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67812 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.951 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.951 killing process with pid 67812 00:10:26.952 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67812' 00:10:26.952 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67812 00:10:26.952 [2024-11-29 07:42:16.726375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.952 07:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67812 00:10:27.211 [2024-11-29 07:42:17.017534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.592 07:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:28.592 00:10:28.592 real 0m10.082s 00:10:28.592 user 0m16.081s 00:10:28.592 sys 0m1.657s 00:10:28.592 07:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.592 07:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.592 ************************************ 00:10:28.592 END TEST raid_state_function_test_sb 00:10:28.592 ************************************ 00:10:28.592 07:42:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:28.592 07:42:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.592 07:42:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.592 07:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.592 ************************************ 00:10:28.592 START TEST raid_superblock_test 00:10:28.592 ************************************ 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68427 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68427 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68427 ']' 00:10:28.592 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.593 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.593 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.593 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.593 07:42:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.593 [2024-11-29 07:42:18.264400] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:28.593 [2024-11-29 07:42:18.264516] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68427 ] 00:10:28.593 [2024-11-29 07:42:18.435798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.863 [2024-11-29 07:42:18.547427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.863 [2024-11-29 07:42:18.736833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.863 [2024-11-29 07:42:18.736893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:29.445 
07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 malloc1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 [2024-11-29 07:42:19.142994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.445 [2024-11-29 07:42:19.143055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.445 [2024-11-29 07:42:19.143090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:29.445 [2024-11-29 07:42:19.143100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.445 [2024-11-29 07:42:19.145259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.445 [2024-11-29 07:42:19.145293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.445 pt1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 malloc2 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 [2024-11-29 07:42:19.197407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.445 [2024-11-29 07:42:19.197463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.445 [2024-11-29 07:42:19.197503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:29.445 [2024-11-29 07:42:19.197512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.445 [2024-11-29 07:42:19.199562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.445 [2024-11-29 07:42:19.199602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.445 
pt2 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 malloc3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 [2024-11-29 07:42:19.266548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:29.445 [2024-11-29 07:42:19.266607] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.445 [2024-11-29 07:42:19.266630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:29.445 [2024-11-29 07:42:19.266639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.445 [2024-11-29 07:42:19.268807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.445 [2024-11-29 07:42:19.268845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:29.445 pt3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.445 [2024-11-29 07:42:19.278554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.445 [2024-11-29 07:42:19.280363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.445 [2024-11-29 07:42:19.280438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:29.445 [2024-11-29 07:42:19.280607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:29.445 [2024-11-29 07:42:19.280635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.445 [2024-11-29 07:42:19.280910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:29.445 
[2024-11-29 07:42:19.281129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:29.445 [2024-11-29 07:42:19.281149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:29.445 [2024-11-29 07:42:19.281312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.445 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.446 "name": "raid_bdev1", 00:10:29.446 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:29.446 "strip_size_kb": 0, 00:10:29.446 "state": "online", 00:10:29.446 "raid_level": "raid1", 00:10:29.446 "superblock": true, 00:10:29.446 "num_base_bdevs": 3, 00:10:29.446 "num_base_bdevs_discovered": 3, 00:10:29.446 "num_base_bdevs_operational": 3, 00:10:29.446 "base_bdevs_list": [ 00:10:29.446 { 00:10:29.446 "name": "pt1", 00:10:29.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.446 "is_configured": true, 00:10:29.446 "data_offset": 2048, 00:10:29.446 "data_size": 63488 00:10:29.446 }, 00:10:29.446 { 00:10:29.446 "name": "pt2", 00:10:29.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.446 "is_configured": true, 00:10:29.446 "data_offset": 2048, 00:10:29.446 "data_size": 63488 00:10:29.446 }, 00:10:29.446 { 00:10:29.446 "name": "pt3", 00:10:29.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.446 "is_configured": true, 00:10:29.446 "data_offset": 2048, 00:10:29.446 "data_size": 63488 00:10:29.446 } 00:10:29.446 ] 00:10:29.446 }' 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.446 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.016 07:42:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.016 [2024-11-29 07:42:19.682164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.016 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.016 "name": "raid_bdev1", 00:10:30.016 "aliases": [ 00:10:30.016 "5821c23f-67c4-4bf5-9726-14e8a8c77300" 00:10:30.016 ], 00:10:30.016 "product_name": "Raid Volume", 00:10:30.016 "block_size": 512, 00:10:30.016 "num_blocks": 63488, 00:10:30.016 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:30.016 "assigned_rate_limits": { 00:10:30.016 "rw_ios_per_sec": 0, 00:10:30.016 "rw_mbytes_per_sec": 0, 00:10:30.016 "r_mbytes_per_sec": 0, 00:10:30.016 "w_mbytes_per_sec": 0 00:10:30.016 }, 00:10:30.016 "claimed": false, 00:10:30.016 "zoned": false, 00:10:30.016 "supported_io_types": { 00:10:30.016 "read": true, 00:10:30.016 "write": true, 00:10:30.016 "unmap": false, 00:10:30.016 "flush": false, 00:10:30.016 "reset": true, 00:10:30.016 "nvme_admin": false, 00:10:30.016 "nvme_io": false, 00:10:30.016 "nvme_io_md": false, 00:10:30.016 "write_zeroes": true, 00:10:30.016 "zcopy": false, 00:10:30.016 "get_zone_info": false, 00:10:30.016 "zone_management": false, 00:10:30.016 "zone_append": false, 00:10:30.016 "compare": false, 00:10:30.016 
"compare_and_write": false, 00:10:30.016 "abort": false, 00:10:30.016 "seek_hole": false, 00:10:30.016 "seek_data": false, 00:10:30.016 "copy": false, 00:10:30.016 "nvme_iov_md": false 00:10:30.016 }, 00:10:30.016 "memory_domains": [ 00:10:30.016 { 00:10:30.016 "dma_device_id": "system", 00:10:30.016 "dma_device_type": 1 00:10:30.016 }, 00:10:30.016 { 00:10:30.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.016 "dma_device_type": 2 00:10:30.016 }, 00:10:30.016 { 00:10:30.016 "dma_device_id": "system", 00:10:30.016 "dma_device_type": 1 00:10:30.016 }, 00:10:30.017 { 00:10:30.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.017 "dma_device_type": 2 00:10:30.017 }, 00:10:30.017 { 00:10:30.017 "dma_device_id": "system", 00:10:30.017 "dma_device_type": 1 00:10:30.017 }, 00:10:30.017 { 00:10:30.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.017 "dma_device_type": 2 00:10:30.017 } 00:10:30.017 ], 00:10:30.017 "driver_specific": { 00:10:30.017 "raid": { 00:10:30.017 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:30.017 "strip_size_kb": 0, 00:10:30.017 "state": "online", 00:10:30.017 "raid_level": "raid1", 00:10:30.017 "superblock": true, 00:10:30.017 "num_base_bdevs": 3, 00:10:30.017 "num_base_bdevs_discovered": 3, 00:10:30.017 "num_base_bdevs_operational": 3, 00:10:30.017 "base_bdevs_list": [ 00:10:30.017 { 00:10:30.017 "name": "pt1", 00:10:30.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.017 "is_configured": true, 00:10:30.017 "data_offset": 2048, 00:10:30.017 "data_size": 63488 00:10:30.017 }, 00:10:30.017 { 00:10:30.017 "name": "pt2", 00:10:30.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.017 "is_configured": true, 00:10:30.017 "data_offset": 2048, 00:10:30.017 "data_size": 63488 00:10:30.017 }, 00:10:30.017 { 00:10:30.017 "name": "pt3", 00:10:30.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.017 "is_configured": true, 00:10:30.017 "data_offset": 2048, 00:10:30.017 "data_size": 63488 00:10:30.017 } 
00:10:30.017 ] 00:10:30.017 } 00:10:30.017 } 00:10:30.017 }' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.017 pt2 00:10:30.017 pt3' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.017 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.017 [2024-11-29 07:42:19.941620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5821c23f-67c4-4bf5-9726-14e8a8c77300 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5821c23f-67c4-4bf5-9726-14e8a8c77300 ']' 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 [2024-11-29 07:42:19.985301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.278 [2024-11-29 07:42:19.985369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.278 [2024-11-29 07:42:19.985464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.278 [2024-11-29 07:42:19.985555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.278 [2024-11-29 07:42:19.985623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 07:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:30.278 07:42:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.278 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.278 [2024-11-29 07:42:20.125129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:30.278 [2024-11-29 07:42:20.127061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:30.278 [2024-11-29 07:42:20.127187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:30.278 [2024-11-29 07:42:20.127284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:30.278 [2024-11-29 07:42:20.127370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:30.278 [2024-11-29 07:42:20.127391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:30.278 [2024-11-29 07:42:20.127407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.278 [2024-11-29 07:42:20.127416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:30.278 request: 00:10:30.278 { 00:10:30.278 "name": "raid_bdev1", 00:10:30.278 "raid_level": "raid1", 00:10:30.278 "base_bdevs": [ 00:10:30.278 "malloc1", 00:10:30.278 "malloc2", 00:10:30.278 "malloc3" 00:10:30.278 ], 00:10:30.278 "superblock": false, 00:10:30.278 "method": "bdev_raid_create", 00:10:30.278 "req_id": 1 00:10:30.278 } 00:10:30.279 Got JSON-RPC error response 00:10:30.279 response: 00:10:30.279 { 00:10:30.279 "code": -17, 00:10:30.279 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:30.279 } 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:30.279 07:42:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 [2024-11-29 07:42:20.180959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:30.279 [2024-11-29 07:42:20.181042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.279 [2024-11-29 07:42:20.181076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:30.279 [2024-11-29 07:42:20.181109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.279 [2024-11-29 07:42:20.183273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.279 [2024-11-29 07:42:20.183339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:30.279 [2024-11-29 07:42:20.183451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:30.279 [2024-11-29 07:42:20.183525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:30.279 pt1 00:10:30.279 07:42:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.279 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.539 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.539 "name": "raid_bdev1", 00:10:30.539 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:30.539 "strip_size_kb": 0, 00:10:30.539 "state": 
"configuring", 00:10:30.539 "raid_level": "raid1", 00:10:30.539 "superblock": true, 00:10:30.539 "num_base_bdevs": 3, 00:10:30.539 "num_base_bdevs_discovered": 1, 00:10:30.539 "num_base_bdevs_operational": 3, 00:10:30.539 "base_bdevs_list": [ 00:10:30.539 { 00:10:30.539 "name": "pt1", 00:10:30.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.539 "is_configured": true, 00:10:30.539 "data_offset": 2048, 00:10:30.539 "data_size": 63488 00:10:30.539 }, 00:10:30.539 { 00:10:30.540 "name": null, 00:10:30.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.540 "is_configured": false, 00:10:30.540 "data_offset": 2048, 00:10:30.540 "data_size": 63488 00:10:30.540 }, 00:10:30.540 { 00:10:30.540 "name": null, 00:10:30.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.540 "is_configured": false, 00:10:30.540 "data_offset": 2048, 00:10:30.540 "data_size": 63488 00:10:30.540 } 00:10:30.540 ] 00:10:30.540 }' 00:10:30.540 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.540 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.800 [2024-11-29 07:42:20.612251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:30.800 [2024-11-29 07:42:20.612377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.800 [2024-11-29 07:42:20.612430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:30.800 
[2024-11-29 07:42:20.612461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.800 [2024-11-29 07:42:20.612976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.800 [2024-11-29 07:42:20.613047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:30.800 [2024-11-29 07:42:20.613189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:30.800 [2024-11-29 07:42:20.613219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.800 pt2 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.800 [2024-11-29 07:42:20.624250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.800 "name": "raid_bdev1", 00:10:30.800 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:30.800 "strip_size_kb": 0, 00:10:30.800 "state": "configuring", 00:10:30.800 "raid_level": "raid1", 00:10:30.800 "superblock": true, 00:10:30.800 "num_base_bdevs": 3, 00:10:30.800 "num_base_bdevs_discovered": 1, 00:10:30.800 "num_base_bdevs_operational": 3, 00:10:30.800 "base_bdevs_list": [ 00:10:30.800 { 00:10:30.800 "name": "pt1", 00:10:30.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.800 "is_configured": true, 00:10:30.800 "data_offset": 2048, 00:10:30.800 "data_size": 63488 00:10:30.800 }, 00:10:30.800 { 00:10:30.800 "name": null, 00:10:30.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.800 "is_configured": false, 00:10:30.800 "data_offset": 0, 00:10:30.800 "data_size": 63488 00:10:30.800 }, 00:10:30.800 { 00:10:30.800 "name": null, 00:10:30.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.800 "is_configured": false, 00:10:30.800 
"data_offset": 2048, 00:10:30.800 "data_size": 63488 00:10:30.800 } 00:10:30.800 ] 00:10:30.800 }' 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.800 07:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.373 [2024-11-29 07:42:21.019614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.373 [2024-11-29 07:42:21.019689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.373 [2024-11-29 07:42:21.019709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:31.373 [2024-11-29 07:42:21.019719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.373 [2024-11-29 07:42:21.020201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.373 [2024-11-29 07:42:21.020230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.373 [2024-11-29 07:42:21.020318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:31.373 [2024-11-29 07:42:21.020355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.373 pt2 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.373 07:42:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.373 [2024-11-29 07:42:21.031568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.373 [2024-11-29 07:42:21.031625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.373 [2024-11-29 07:42:21.031639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:31.373 [2024-11-29 07:42:21.031648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.373 [2024-11-29 07:42:21.032002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.373 [2024-11-29 07:42:21.032022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.373 [2024-11-29 07:42:21.032083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:31.373 [2024-11-29 07:42:21.032133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.373 [2024-11-29 07:42:21.032249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.373 [2024-11-29 07:42:21.032268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.373 [2024-11-29 07:42:21.032503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:31.373 [2024-11-29 07:42:21.032652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:31.373 [2024-11-29 07:42:21.032661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:31.373 [2024-11-29 07:42:21.032806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.373 pt3 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.373 07:42:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.373 "name": "raid_bdev1", 00:10:31.373 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:31.373 "strip_size_kb": 0, 00:10:31.373 "state": "online", 00:10:31.373 "raid_level": "raid1", 00:10:31.373 "superblock": true, 00:10:31.373 "num_base_bdevs": 3, 00:10:31.373 "num_base_bdevs_discovered": 3, 00:10:31.373 "num_base_bdevs_operational": 3, 00:10:31.373 "base_bdevs_list": [ 00:10:31.373 { 00:10:31.373 "name": "pt1", 00:10:31.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.373 "is_configured": true, 00:10:31.373 "data_offset": 2048, 00:10:31.373 "data_size": 63488 00:10:31.373 }, 00:10:31.373 { 00:10:31.373 "name": "pt2", 00:10:31.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.373 "is_configured": true, 00:10:31.373 "data_offset": 2048, 00:10:31.373 "data_size": 63488 00:10:31.373 }, 00:10:31.373 { 00:10:31.373 "name": "pt3", 00:10:31.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.373 "is_configured": true, 00:10:31.373 "data_offset": 2048, 00:10:31.373 "data_size": 63488 00:10:31.373 } 00:10:31.373 ] 00:10:31.373 }' 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.373 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.636 [2024-11-29 07:42:21.451208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.636 "name": "raid_bdev1", 00:10:31.636 "aliases": [ 00:10:31.636 "5821c23f-67c4-4bf5-9726-14e8a8c77300" 00:10:31.636 ], 00:10:31.636 "product_name": "Raid Volume", 00:10:31.636 "block_size": 512, 00:10:31.636 "num_blocks": 63488, 00:10:31.636 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:31.636 "assigned_rate_limits": { 00:10:31.636 "rw_ios_per_sec": 0, 00:10:31.636 "rw_mbytes_per_sec": 0, 00:10:31.636 "r_mbytes_per_sec": 0, 00:10:31.636 "w_mbytes_per_sec": 0 00:10:31.636 }, 00:10:31.636 "claimed": false, 00:10:31.636 "zoned": false, 00:10:31.636 "supported_io_types": { 00:10:31.636 "read": true, 00:10:31.636 "write": true, 00:10:31.636 "unmap": false, 00:10:31.636 "flush": false, 00:10:31.636 "reset": true, 00:10:31.636 "nvme_admin": false, 00:10:31.636 "nvme_io": false, 00:10:31.636 "nvme_io_md": false, 00:10:31.636 "write_zeroes": true, 00:10:31.636 "zcopy": false, 00:10:31.636 "get_zone_info": 
false, 00:10:31.636 "zone_management": false, 00:10:31.636 "zone_append": false, 00:10:31.636 "compare": false, 00:10:31.636 "compare_and_write": false, 00:10:31.636 "abort": false, 00:10:31.636 "seek_hole": false, 00:10:31.636 "seek_data": false, 00:10:31.636 "copy": false, 00:10:31.636 "nvme_iov_md": false 00:10:31.636 }, 00:10:31.636 "memory_domains": [ 00:10:31.636 { 00:10:31.636 "dma_device_id": "system", 00:10:31.636 "dma_device_type": 1 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.636 "dma_device_type": 2 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "dma_device_id": "system", 00:10:31.636 "dma_device_type": 1 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.636 "dma_device_type": 2 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "dma_device_id": "system", 00:10:31.636 "dma_device_type": 1 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.636 "dma_device_type": 2 00:10:31.636 } 00:10:31.636 ], 00:10:31.636 "driver_specific": { 00:10:31.636 "raid": { 00:10:31.636 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:31.636 "strip_size_kb": 0, 00:10:31.636 "state": "online", 00:10:31.636 "raid_level": "raid1", 00:10:31.636 "superblock": true, 00:10:31.636 "num_base_bdevs": 3, 00:10:31.636 "num_base_bdevs_discovered": 3, 00:10:31.636 "num_base_bdevs_operational": 3, 00:10:31.636 "base_bdevs_list": [ 00:10:31.636 { 00:10:31.636 "name": "pt1", 00:10:31.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.636 "is_configured": true, 00:10:31.636 "data_offset": 2048, 00:10:31.636 "data_size": 63488 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "name": "pt2", 00:10:31.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.636 "is_configured": true, 00:10:31.636 "data_offset": 2048, 00:10:31.636 "data_size": 63488 00:10:31.636 }, 00:10:31.636 { 00:10:31.636 "name": "pt3", 00:10:31.636 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:31.636 "is_configured": true, 00:10:31.636 "data_offset": 2048, 00:10:31.636 "data_size": 63488 00:10:31.636 } 00:10:31.636 ] 00:10:31.636 } 00:10:31.636 } 00:10:31.636 }' 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:31.636 pt2 00:10:31.636 pt3' 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.636 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:31.896 [2024-11-29 07:42:21.726682] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5821c23f-67c4-4bf5-9726-14e8a8c77300 '!=' 5821c23f-67c4-4bf5-9726-14e8a8c77300 ']' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 [2024-11-29 07:42:21.758359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.896 07:42:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.896 "name": "raid_bdev1", 00:10:31.896 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:31.896 "strip_size_kb": 0, 00:10:31.896 "state": "online", 00:10:31.896 "raid_level": "raid1", 00:10:31.896 "superblock": true, 00:10:31.896 "num_base_bdevs": 3, 00:10:31.896 "num_base_bdevs_discovered": 2, 00:10:31.896 "num_base_bdevs_operational": 2, 00:10:31.896 "base_bdevs_list": [ 00:10:31.896 { 00:10:31.896 "name": null, 00:10:31.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.896 "is_configured": false, 00:10:31.896 "data_offset": 0, 00:10:31.896 "data_size": 63488 00:10:31.896 }, 00:10:31.896 { 00:10:31.896 "name": "pt2", 00:10:31.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.896 "is_configured": true, 00:10:31.896 "data_offset": 2048, 00:10:31.896 "data_size": 63488 00:10:31.896 }, 00:10:31.896 { 00:10:31.896 "name": "pt3", 00:10:31.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.896 "is_configured": true, 00:10:31.896 "data_offset": 2048, 00:10:31.896 "data_size": 63488 00:10:31.896 } 
00:10:31.896 ] 00:10:31.896 }' 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.896 07:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.466 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.466 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.466 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.466 [2024-11-29 07:42:22.161689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.466 [2024-11-29 07:42:22.161789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.466 [2024-11-29 07:42:22.161896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.466 [2024-11-29 07:42:22.161988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.466 [2024-11-29 07:42:22.162043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:32.466 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.466 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 07:42:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 [2024-11-29 07:42:22.245484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:32.467 [2024-11-29 07:42:22.245539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.467 [2024-11-29 07:42:22.245571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:32.467 [2024-11-29 07:42:22.245581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.467 [2024-11-29 07:42:22.247775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.467 [2024-11-29 07:42:22.247817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:32.467 [2024-11-29 07:42:22.247894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:32.467 [2024-11-29 07:42:22.247951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.467 pt2 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.467 07:42:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.467 "name": "raid_bdev1", 00:10:32.467 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:32.467 "strip_size_kb": 0, 00:10:32.467 "state": "configuring", 00:10:32.467 "raid_level": "raid1", 00:10:32.467 "superblock": true, 00:10:32.467 "num_base_bdevs": 3, 00:10:32.467 "num_base_bdevs_discovered": 1, 00:10:32.467 "num_base_bdevs_operational": 2, 00:10:32.467 "base_bdevs_list": [ 00:10:32.467 { 00:10:32.467 "name": null, 00:10:32.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.467 "is_configured": false, 00:10:32.467 "data_offset": 2048, 00:10:32.467 "data_size": 63488 00:10:32.467 }, 00:10:32.467 { 00:10:32.467 "name": "pt2", 00:10:32.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.467 "is_configured": true, 00:10:32.467 "data_offset": 2048, 00:10:32.467 "data_size": 63488 00:10:32.467 }, 00:10:32.467 { 00:10:32.467 "name": null, 00:10:32.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.467 "is_configured": false, 00:10:32.467 "data_offset": 2048, 00:10:32.467 "data_size": 63488 00:10:32.467 } 
00:10:32.467 ] 00:10:32.467 }' 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.467 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.037 [2024-11-29 07:42:22.688771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.037 [2024-11-29 07:42:22.688905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.037 [2024-11-29 07:42:22.688949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:33.037 [2024-11-29 07:42:22.688987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.037 [2024-11-29 07:42:22.689537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.037 [2024-11-29 07:42:22.689622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.037 [2024-11-29 07:42:22.689760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:33.037 [2024-11-29 07:42:22.689821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.037 [2024-11-29 07:42:22.689970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:33.037 [2024-11-29 07:42:22.690011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.037 [2024-11-29 07:42:22.690312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:33.037 [2024-11-29 07:42:22.690522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.037 [2024-11-29 07:42:22.690564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:33.037 [2024-11-29 07:42:22.690754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.037 pt3 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.037 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.038 
07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.038 "name": "raid_bdev1", 00:10:33.038 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:33.038 "strip_size_kb": 0, 00:10:33.038 "state": "online", 00:10:33.038 "raid_level": "raid1", 00:10:33.038 "superblock": true, 00:10:33.038 "num_base_bdevs": 3, 00:10:33.038 "num_base_bdevs_discovered": 2, 00:10:33.038 "num_base_bdevs_operational": 2, 00:10:33.038 "base_bdevs_list": [ 00:10:33.038 { 00:10:33.038 "name": null, 00:10:33.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.038 "is_configured": false, 00:10:33.038 "data_offset": 2048, 00:10:33.038 "data_size": 63488 00:10:33.038 }, 00:10:33.038 { 00:10:33.038 "name": "pt2", 00:10:33.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.038 "is_configured": true, 00:10:33.038 "data_offset": 2048, 00:10:33.038 "data_size": 63488 00:10:33.038 }, 00:10:33.038 { 00:10:33.038 "name": "pt3", 00:10:33.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.038 "is_configured": true, 00:10:33.038 "data_offset": 2048, 00:10:33.038 "data_size": 63488 00:10:33.038 } 00:10:33.038 ] 00:10:33.038 }' 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.038 07:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 [2024-11-29 07:42:23.139973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.297 [2024-11-29 07:42:23.140008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.297 [2024-11-29 07:42:23.140090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.297 [2024-11-29 07:42:23.140165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.297 [2024-11-29 07:42:23.140175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 [2024-11-29 07:42:23.207844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.297 [2024-11-29 07:42:23.207903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.297 [2024-11-29 07:42:23.207922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:33.297 [2024-11-29 07:42:23.207931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.297 [2024-11-29 07:42:23.210129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.297 [2024-11-29 07:42:23.210163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.297 [2024-11-29 07:42:23.210244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:33.297 [2024-11-29 07:42:23.210287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.297 [2024-11-29 07:42:23.210404] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:33.297 [2024-11-29 07:42:23.210414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.297 [2024-11-29 07:42:23.210429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:33.297 [2024-11-29 07:42:23.210486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.297 pt1 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.297 07:42:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.556 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.556 "name": "raid_bdev1", 00:10:33.556 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:33.556 "strip_size_kb": 0, 00:10:33.556 "state": "configuring", 00:10:33.556 "raid_level": "raid1", 00:10:33.556 "superblock": true, 00:10:33.556 "num_base_bdevs": 3, 00:10:33.556 "num_base_bdevs_discovered": 1, 00:10:33.556 "num_base_bdevs_operational": 2, 00:10:33.556 "base_bdevs_list": [ 00:10:33.556 { 00:10:33.556 "name": null, 00:10:33.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.556 "is_configured": false, 00:10:33.556 "data_offset": 2048, 00:10:33.556 "data_size": 63488 00:10:33.556 }, 00:10:33.556 { 00:10:33.556 "name": "pt2", 00:10:33.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.556 "is_configured": true, 00:10:33.556 "data_offset": 2048, 00:10:33.556 "data_size": 63488 00:10:33.556 }, 00:10:33.556 { 00:10:33.556 "name": null, 00:10:33.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.556 "is_configured": false, 00:10:33.556 "data_offset": 2048, 00:10:33.556 "data_size": 63488 00:10:33.556 } 00:10:33.556 ] 00:10:33.556 }' 00:10:33.556 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.556 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.815 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.816 [2024-11-29 07:42:23.671090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.816 [2024-11-29 07:42:23.671231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.816 [2024-11-29 07:42:23.671273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:33.816 [2024-11-29 07:42:23.671305] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.816 [2024-11-29 07:42:23.671824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.816 [2024-11-29 07:42:23.671882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.816 [2024-11-29 07:42:23.672006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:33.816 [2024-11-29 07:42:23.672058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.816 [2024-11-29 07:42:23.672242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:33.816 [2024-11-29 07:42:23.672283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.816 [2024-11-29 07:42:23.672553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:33.816 [2024-11-29 07:42:23.672753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:33.816 [2024-11-29 07:42:23.672801] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:33.816 [2024-11-29 07:42:23.672977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.816 pt3 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.816 "name": "raid_bdev1", 00:10:33.816 "uuid": "5821c23f-67c4-4bf5-9726-14e8a8c77300", 00:10:33.816 "strip_size_kb": 0, 00:10:33.816 "state": "online", 00:10:33.816 "raid_level": "raid1", 00:10:33.816 "superblock": true, 00:10:33.816 "num_base_bdevs": 3, 00:10:33.816 "num_base_bdevs_discovered": 2, 00:10:33.816 "num_base_bdevs_operational": 2, 00:10:33.816 "base_bdevs_list": [ 00:10:33.816 { 00:10:33.816 "name": null, 00:10:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.816 "is_configured": false, 00:10:33.816 "data_offset": 2048, 00:10:33.816 "data_size": 63488 00:10:33.816 }, 00:10:33.816 { 00:10:33.816 "name": "pt2", 00:10:33.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.816 "is_configured": true, 00:10:33.816 "data_offset": 2048, 00:10:33.816 "data_size": 63488 00:10:33.816 }, 00:10:33.816 { 00:10:33.816 "name": "pt3", 00:10:33.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.816 "is_configured": true, 00:10:33.816 "data_offset": 2048, 00:10:33.816 "data_size": 63488 00:10:33.816 } 00:10:33.816 ] 00:10:33.816 }' 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.816 07:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:34.384 [2024-11-29 07:42:24.154518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5821c23f-67c4-4bf5-9726-14e8a8c77300 '!=' 5821c23f-67c4-4bf5-9726-14e8a8c77300 ']' 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68427 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68427 ']' 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68427 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68427 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68427' 00:10:34.384 killing process with pid 68427 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68427 00:10:34.384 [2024-11-29 07:42:24.241278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.384 [2024-11-29 07:42:24.241372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.384 [2024-11-29 07:42:24.241433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.384 [2024-11-29 07:42:24.241444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:34.384 07:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68427 00:10:34.643 [2024-11-29 07:42:24.537247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.020 07:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:36.020 00:10:36.020 real 0m7.453s 00:10:36.020 user 0m11.646s 00:10:36.020 sys 0m1.267s 00:10:36.020 07:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.020 07:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.020 ************************************ 00:10:36.020 END TEST raid_superblock_test 00:10:36.020 ************************************ 00:10:36.020 07:42:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:36.020 07:42:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.020 07:42:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.020 07:42:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.020 ************************************ 00:10:36.020 START TEST raid_read_error_test 00:10:36.020 ************************************ 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:36.020 07:42:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:36.020 07:42:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CTJjhU7EmV 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68867 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68867 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68867 ']' 00:10:36.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.020 07:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.020 [2024-11-29 07:42:25.801561] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:36.020 [2024-11-29 07:42:25.801672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68867 ] 00:10:36.279 [2024-11-29 07:42:25.973451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.279 [2024-11-29 07:42:26.085256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.538 [2024-11-29 07:42:26.283145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.538 [2024-11-29 07:42:26.283191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.797 BaseBdev1_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.797 true 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.797 [2024-11-29 07:42:26.689414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:36.797 [2024-11-29 07:42:26.689541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.797 [2024-11-29 07:42:26.689566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:36.797 [2024-11-29 07:42:26.689577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.797 [2024-11-29 07:42:26.691732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.797 [2024-11-29 07:42:26.691774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.797 BaseBdev1 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.797 BaseBdev2_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 true 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 [2024-11-29 07:42:26.755380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:37.094 [2024-11-29 07:42:26.755483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.094 [2024-11-29 07:42:26.755504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:37.094 [2024-11-29 07:42:26.755515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.094 [2024-11-29 07:42:26.757868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.094 [2024-11-29 07:42:26.757907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:37.094 BaseBdev2 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 BaseBdev3_malloc 00:10:37.094 07:42:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 true 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 [2024-11-29 07:42:26.834821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:37.094 [2024-11-29 07:42:26.834871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.094 [2024-11-29 07:42:26.834902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:37.094 [2024-11-29 07:42:26.834912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.094 [2024-11-29 07:42:26.837037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.094 [2024-11-29 07:42:26.837151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:37.094 BaseBdev3 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 [2024-11-29 07:42:26.846866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.094 [2024-11-29 07:42:26.848653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.094 [2024-11-29 07:42:26.848776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.094 [2024-11-29 07:42:26.848988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.094 [2024-11-29 07:42:26.849002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.094 [2024-11-29 07:42:26.849249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:37.094 [2024-11-29 07:42:26.849429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.094 [2024-11-29 07:42:26.849447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:37.094 [2024-11-29 07:42:26.849591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.094 07:42:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.094 "name": "raid_bdev1", 00:10:37.094 "uuid": "c0732377-62a7-4f39-834e-3dac4e9a68e5", 00:10:37.094 "strip_size_kb": 0, 00:10:37.094 "state": "online", 00:10:37.094 "raid_level": "raid1", 00:10:37.094 "superblock": true, 00:10:37.094 "num_base_bdevs": 3, 00:10:37.094 "num_base_bdevs_discovered": 3, 00:10:37.094 "num_base_bdevs_operational": 3, 00:10:37.094 "base_bdevs_list": [ 00:10:37.094 { 00:10:37.094 "name": "BaseBdev1", 00:10:37.094 "uuid": "9f6709b8-c9e7-5c57-9354-284c045f1cea", 00:10:37.094 "is_configured": true, 00:10:37.094 "data_offset": 2048, 00:10:37.094 "data_size": 63488 00:10:37.094 }, 00:10:37.094 { 00:10:37.094 "name": "BaseBdev2", 00:10:37.094 "uuid": "5cf532fe-459f-5624-b573-c2509acd6854", 00:10:37.094 "is_configured": true, 00:10:37.094 "data_offset": 2048, 00:10:37.094 "data_size": 63488 
00:10:37.094 }, 00:10:37.094 { 00:10:37.094 "name": "BaseBdev3", 00:10:37.094 "uuid": "e518d6ea-db59-5c01-9e36-148e5433b7dc", 00:10:37.094 "is_configured": true, 00:10:37.094 "data_offset": 2048, 00:10:37.094 "data_size": 63488 00:10:37.094 } 00:10:37.094 ] 00:10:37.094 }' 00:10:37.094 07:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.095 07:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.668 07:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.668 07:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.668 [2024-11-29 07:42:27.403032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.604 
07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.604 "name": "raid_bdev1", 00:10:38.604 "uuid": "c0732377-62a7-4f39-834e-3dac4e9a68e5", 00:10:38.604 "strip_size_kb": 0, 00:10:38.604 "state": "online", 00:10:38.604 "raid_level": "raid1", 00:10:38.604 "superblock": true, 00:10:38.604 "num_base_bdevs": 3, 00:10:38.604 "num_base_bdevs_discovered": 3, 00:10:38.604 "num_base_bdevs_operational": 3, 00:10:38.604 "base_bdevs_list": [ 00:10:38.604 { 00:10:38.604 "name": "BaseBdev1", 00:10:38.604 "uuid": "9f6709b8-c9e7-5c57-9354-284c045f1cea", 
00:10:38.604 "is_configured": true, 00:10:38.604 "data_offset": 2048, 00:10:38.604 "data_size": 63488 00:10:38.604 }, 00:10:38.604 { 00:10:38.604 "name": "BaseBdev2", 00:10:38.604 "uuid": "5cf532fe-459f-5624-b573-c2509acd6854", 00:10:38.604 "is_configured": true, 00:10:38.604 "data_offset": 2048, 00:10:38.604 "data_size": 63488 00:10:38.604 }, 00:10:38.604 { 00:10:38.604 "name": "BaseBdev3", 00:10:38.604 "uuid": "e518d6ea-db59-5c01-9e36-148e5433b7dc", 00:10:38.604 "is_configured": true, 00:10:38.604 "data_offset": 2048, 00:10:38.604 "data_size": 63488 00:10:38.604 } 00:10:38.604 ] 00:10:38.604 }' 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.604 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.862 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.862 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.862 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.121 [2024-11-29 07:42:28.806109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.121 [2024-11-29 07:42:28.806157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.121 [2024-11-29 07:42:28.809413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.121 [2024-11-29 07:42:28.809502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.121 [2024-11-29 07:42:28.809624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.121 [2024-11-29 07:42:28.809689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:39.121 { 00:10:39.121 "results": [ 00:10:39.121 { 00:10:39.121 "job": "raid_bdev1", 
00:10:39.121 "core_mask": "0x1", 00:10:39.121 "workload": "randrw", 00:10:39.121 "percentage": 50, 00:10:39.121 "status": "finished", 00:10:39.121 "queue_depth": 1, 00:10:39.121 "io_size": 131072, 00:10:39.121 "runtime": 1.404213, 00:10:39.121 "iops": 13581.272926543195, 00:10:39.121 "mibps": 1697.6591158178994, 00:10:39.121 "io_failed": 0, 00:10:39.121 "io_timeout": 0, 00:10:39.121 "avg_latency_us": 70.9947573065852, 00:10:39.121 "min_latency_us": 23.699563318777294, 00:10:39.121 "max_latency_us": 1430.9170305676855 00:10:39.121 } 00:10:39.121 ], 00:10:39.121 "core_count": 1 00:10:39.121 } 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68867 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68867 ']' 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68867 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68867 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68867' 00:10:39.121 killing process with pid 68867 00:10:39.121 07:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68867 00:10:39.121 [2024-11-29 07:42:28.857302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.121 07:42:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68867 00:10:39.380 [2024-11-29 07:42:29.086524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.317 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CTJjhU7EmV 00:10:40.317 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:40.317 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:40.576 ************************************ 00:10:40.576 END TEST raid_read_error_test 00:10:40.576 ************************************ 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:40.576 00:10:40.576 real 0m4.575s 00:10:40.576 user 0m5.476s 00:10:40.576 sys 0m0.563s 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.576 07:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.576 07:42:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:40.576 07:42:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:40.576 07:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.576 07:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.576 ************************************ 00:10:40.576 START TEST raid_write_error_test 00:10:40.576 ************************************ 00:10:40.576 07:42:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:40.576 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BbWgcA3Jr8
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69013
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69013
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69013 ']'
00:10:40.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:40.577 07:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.577 [2024-11-29 07:42:30.446770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:40.577 [2024-11-29 07:42:30.446887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69013 ]
00:10:40.835 [2024-11-29 07:42:30.618260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:40.835 [2024-11-29 07:42:30.730530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.094 [2024-11-29 07:42:30.927477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:41.094 [2024-11-29 07:42:30.927509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.352 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 BaseBdev1_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 true
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 [2024-11-29 07:42:31.336061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:41.612 [2024-11-29 07:42:31.336173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:41.612 [2024-11-29 07:42:31.336198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:41.612 [2024-11-29 07:42:31.336209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:41.612 [2024-11-29 07:42:31.338288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:41.612 [2024-11-29 07:42:31.338339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:41.612 BaseBdev1
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 BaseBdev2_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 true
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 [2024-11-29 07:42:31.401581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:41.612 [2024-11-29 07:42:31.401635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:41.612 [2024-11-29 07:42:31.401651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:41.612 [2024-11-29 07:42:31.401661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:41.612 [2024-11-29 07:42:31.403741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:41.612 [2024-11-29 07:42:31.403851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:41.612 BaseBdev2
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 BaseBdev3_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 true
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 [2024-11-29 07:42:31.480863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:41.612 [2024-11-29 07:42:31.480959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:41.612 [2024-11-29 07:42:31.480998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:41.612 [2024-11-29 07:42:31.481009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:41.612 [2024-11-29 07:42:31.483131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:41.612 [2024-11-29 07:42:31.483170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:41.612 BaseBdev3
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 [2024-11-29 07:42:31.492907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:41.612 [2024-11-29 07:42:31.494693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:41.612 [2024-11-29 07:42:31.494770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:41.612 [2024-11-29 07:42:31.494980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:41.612 [2024-11-29 07:42:31.494993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:41.612 [2024-11-29 07:42:31.495238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:10:41.612 [2024-11-29 07:42:31.495411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:41.612 [2024-11-29 07:42:31.495421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:10:41.612 [2024-11-29 07:42:31.495552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.612 "name": "raid_bdev1",
00:10:41.612 "uuid": "e5aa67b4-06a5-4939-a227-01e889ba2ad9",
00:10:41.612 "strip_size_kb": 0,
00:10:41.612 "state": "online",
00:10:41.612 "raid_level": "raid1",
00:10:41.612 "superblock": true,
00:10:41.612 "num_base_bdevs": 3,
00:10:41.612 "num_base_bdevs_discovered": 3,
00:10:41.612 "num_base_bdevs_operational": 3,
00:10:41.612 "base_bdevs_list": [
00:10:41.612 {
00:10:41.612 "name": "BaseBdev1",
00:10:41.612 "uuid": "8b8dfe09-31e3-5a93-b472-460be3ab5e7f",
00:10:41.612 "is_configured": true,
00:10:41.612 "data_offset": 2048,
00:10:41.612 "data_size": 63488
00:10:41.612 },
00:10:41.612 {
00:10:41.612 "name": "BaseBdev2",
00:10:41.612 "uuid": "a5c39ff4-1766-52ca-a2f0-923e3cef7ea9",
00:10:41.612 "is_configured": true,
00:10:41.612 "data_offset": 2048,
00:10:41.612 "data_size": 63488
00:10:41.612 },
00:10:41.612 {
00:10:41.612 "name": "BaseBdev3",
00:10:41.612 "uuid": "2b719540-aba9-5d8b-b856-14547561bc77",
00:10:41.612 "is_configured": true,
00:10:41.612 "data_offset": 2048,
00:10:41.612 "data_size": 63488
00:10:41.612 }
00:10:41.612 ]
00:10:41.612 }'
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.612 07:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:42.182 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:42.182 07:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:42.182 [2024-11-29 07:42:32.013520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.121 [2024-11-29 07:42:32.932921] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:10:43.121 [2024-11-29 07:42:32.932979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:43.121 [2024-11-29 07:42:32.933200] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:43.121 "name": "raid_bdev1",
00:10:43.121 "uuid": "e5aa67b4-06a5-4939-a227-01e889ba2ad9",
00:10:43.121 "strip_size_kb": 0,
00:10:43.121 "state": "online",
00:10:43.121 "raid_level": "raid1",
00:10:43.121 "superblock": true,
00:10:43.121 "num_base_bdevs": 3,
00:10:43.121 "num_base_bdevs_discovered": 2,
00:10:43.121 "num_base_bdevs_operational": 2,
00:10:43.121 "base_bdevs_list": [
00:10:43.121 {
00:10:43.121 "name": null,
00:10:43.121 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:43.121 "is_configured": false,
00:10:43.121 "data_offset": 0,
00:10:43.121 "data_size": 63488
00:10:43.121 },
00:10:43.121 {
00:10:43.121 "name": "BaseBdev2",
00:10:43.121 "uuid": "a5c39ff4-1766-52ca-a2f0-923e3cef7ea9",
00:10:43.121 "is_configured": true,
00:10:43.121 "data_offset": 2048,
00:10:43.121 "data_size": 63488
00:10:43.121 },
00:10:43.121 {
00:10:43.121 "name": "BaseBdev3",
00:10:43.121 "uuid": "2b719540-aba9-5d8b-b856-14547561bc77",
00:10:43.121 "is_configured": true,
00:10:43.121 "data_offset": 2048,
00:10:43.121 "data_size": 63488
00:10:43.121 }
00:10:43.121 ]
00:10:43.121 }'
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:43.121 07:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.691 [2024-11-29 07:42:33.399181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:43.691 [2024-11-29 07:42:33.399290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:43.691 [2024-11-29 07:42:33.401993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:43.691 [2024-11-29 07:42:33.402119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:43.691 [2024-11-29 07:42:33.402237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:43.691 [2024-11-29 07:42:33.402288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:10:43.691 {
00:10:43.691 "results": [
00:10:43.691 {
00:10:43.691 "job": "raid_bdev1",
00:10:43.691 "core_mask": "0x1",
00:10:43.691 "workload": "randrw",
00:10:43.691 "percentage": 50,
00:10:43.691 "status": "finished",
00:10:43.691 "queue_depth": 1,
00:10:43.691 "io_size": 131072,
00:10:43.691 "runtime": 1.386668,
00:10:43.691 "iops": 14669.697432983237,
00:10:43.691 "mibps": 1833.7121791229047,
00:10:43.691 "io_failed": 0,
00:10:43.691 "io_timeout": 0,
00:10:43.691 "avg_latency_us": 65.51444602966136,
00:10:43.691 "min_latency_us": 23.475982532751093,
00:10:43.691 "max_latency_us": 1395.1441048034935
00:10:43.691 }
00:10:43.691 ],
00:10:43.691 "core_count": 1
00:10:43.691 }
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69013
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69013 ']'
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69013
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69013
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69013' killing process with pid 69013
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69013 [2024-11-29 07:42:33.446306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:43.691 07:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69013
00:10:43.951 [2024-11-29 07:42:33.673097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BbWgcA3Jr8
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:45.333
00:10:45.333 real 0m4.509s
00:10:45.333 user 0m5.352s
00:10:45.333 sys 0m0.564s
07:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.333 ************************************
00:10:45.333 END TEST raid_write_error_test
00:10:45.333 ************************************
00:10:45.333 07:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.333 07:42:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:10:45.333 07:42:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:45.334 07:42:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:10:45.334 07:42:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:45.334 07:42:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:45.334 07:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:45.334 ************************************
00:10:45.334 START TEST raid_state_function_test
00:10:45.334 ************************************
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:45.334 Process raid pid: 69156
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69156
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69156'
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69156
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69156 ']'
00:10:45.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:45.334 07:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.334 [2024-11-29 07:42:35.015450] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:45.334 [2024-11-29 07:42:35.015657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:45.334 [2024-11-29 07:42:35.194271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:45.596 [2024-11-29 07:42:35.304643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:45.596 [2024-11-29 07:42:35.499876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:45.596 [2024-11-29 07:42:35.500001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.167 [2024-11-29 07:42:35.837788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:46.167 [2024-11-29 07:42:35.837846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:46.167 [2024-11-29 07:42:35.837857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:46.167 [2024-11-29 07:42:35.837866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:46.167 [2024-11-29 07:42:35.837872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:46.167 [2024-11-29 07:42:35.837881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:46.167 [2024-11-29 07:42:35.837887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:46.167 [2024-11-29 07:42:35.837895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.167 "name": "Existed_Raid",
00:10:46.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.167 "strip_size_kb": 64,
00:10:46.167 "state": "configuring",
00:10:46.167 "raid_level": "raid0",
00:10:46.167 "superblock": false,
00:10:46.167 "num_base_bdevs": 4,
00:10:46.167 "num_base_bdevs_discovered": 0,
00:10:46.167 "num_base_bdevs_operational": 4,
00:10:46.167 "base_bdevs_list": [
00:10:46.167 {
00:10:46.167 "name": "BaseBdev1",
00:10:46.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.167 "is_configured": false,
00:10:46.167 "data_offset": 0,
00:10:46.167 "data_size": 0
00:10:46.167 },
00:10:46.167 {
00:10:46.167 "name": "BaseBdev2",
00:10:46.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.167 "is_configured": false,
00:10:46.167 "data_offset": 0,
00:10:46.167 "data_size": 0
00:10:46.167 },
00:10:46.167 {
00:10:46.167 "name": "BaseBdev3",
00:10:46.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.167 "is_configured": false,
00:10:46.167 "data_offset": 0,
00:10:46.167 "data_size": 0
00:10:46.167 },
00:10:46.167 {
00:10:46.167 "name": "BaseBdev4",
00:10:46.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.167 "is_configured": false,
00:10:46.167 "data_offset": 0,
00:10:46.167 "data_size": 0
00:10:46.167 }
00:10:46.167 ]
00:10:46.167 }'
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.167 07:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.428 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:46.428 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.428 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.429 [2024-11-29 07:42:36.277032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:46.429 [2024-11-29 07:42:36.277158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.429 [2024-11-29 07:42:36.288999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:46.429 [2024-11-29 07:42:36.289088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:46.429 [2024-11-29 07:42:36.289147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:46.429 [2024-11-29 07:42:36.289171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:46.429 [2024-11-29 07:42:36.289190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:46.429 [2024-11-29 07:42:36.289211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:46.429 [2024-11-29 07:42:36.289229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:46.429 [2024-11-29 07:42:36.289250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.429 [2024-11-29 07:42:36.336073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:46.429 BaseBdev1
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.429 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.429 [ 00:10:46.429 { 00:10:46.429 "name": "BaseBdev1", 00:10:46.429 "aliases": [ 00:10:46.429 "cb6e252c-ed0a-4888-917b-969b768abbbe" 00:10:46.429 ], 00:10:46.429 "product_name": "Malloc disk", 00:10:46.429 "block_size": 512, 00:10:46.429 "num_blocks": 65536, 00:10:46.429 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:46.429 "assigned_rate_limits": { 00:10:46.429 "rw_ios_per_sec": 0, 00:10:46.429 "rw_mbytes_per_sec": 0, 00:10:46.429 "r_mbytes_per_sec": 0, 00:10:46.429 "w_mbytes_per_sec": 0 00:10:46.429 }, 00:10:46.429 "claimed": true, 00:10:46.429 "claim_type": "exclusive_write", 00:10:46.429 "zoned": false, 00:10:46.429 "supported_io_types": { 00:10:46.429 "read": true, 00:10:46.429 "write": true, 00:10:46.429 "unmap": true, 00:10:46.429 "flush": true, 00:10:46.429 "reset": true, 00:10:46.429 "nvme_admin": false, 00:10:46.429 "nvme_io": false, 00:10:46.429 "nvme_io_md": false, 00:10:46.429 "write_zeroes": true, 00:10:46.429 "zcopy": true, 00:10:46.429 "get_zone_info": false, 00:10:46.429 "zone_management": false, 00:10:46.429 "zone_append": false, 00:10:46.429 "compare": false, 00:10:46.429 "compare_and_write": false, 00:10:46.429 "abort": true, 00:10:46.429 "seek_hole": false, 00:10:46.429 "seek_data": false, 00:10:46.429 "copy": true, 00:10:46.429 "nvme_iov_md": false 00:10:46.429 }, 00:10:46.689 "memory_domains": [ 00:10:46.689 { 00:10:46.689 "dma_device_id": "system", 00:10:46.689 "dma_device_type": 1 00:10:46.689 }, 00:10:46.689 { 00:10:46.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.689 "dma_device_type": 2 00:10:46.689 } 00:10:46.689 ], 00:10:46.689 "driver_specific": {} 00:10:46.689 } 00:10:46.689 ] 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.689 "name": "Existed_Raid", 
00:10:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.689 "strip_size_kb": 64, 00:10:46.689 "state": "configuring", 00:10:46.689 "raid_level": "raid0", 00:10:46.689 "superblock": false, 00:10:46.689 "num_base_bdevs": 4, 00:10:46.689 "num_base_bdevs_discovered": 1, 00:10:46.689 "num_base_bdevs_operational": 4, 00:10:46.689 "base_bdevs_list": [ 00:10:46.689 { 00:10:46.689 "name": "BaseBdev1", 00:10:46.689 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:46.689 "is_configured": true, 00:10:46.689 "data_offset": 0, 00:10:46.689 "data_size": 65536 00:10:46.689 }, 00:10:46.689 { 00:10:46.689 "name": "BaseBdev2", 00:10:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.689 "is_configured": false, 00:10:46.689 "data_offset": 0, 00:10:46.689 "data_size": 0 00:10:46.689 }, 00:10:46.689 { 00:10:46.689 "name": "BaseBdev3", 00:10:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.689 "is_configured": false, 00:10:46.689 "data_offset": 0, 00:10:46.689 "data_size": 0 00:10:46.689 }, 00:10:46.689 { 00:10:46.689 "name": "BaseBdev4", 00:10:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.689 "is_configured": false, 00:10:46.689 "data_offset": 0, 00:10:46.689 "data_size": 0 00:10:46.689 } 00:10:46.689 ] 00:10:46.689 }' 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.689 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.949 [2024-11-29 07:42:36.831316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.949 [2024-11-29 07:42:36.831371] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.949 [2024-11-29 07:42:36.843353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.949 [2024-11-29 07:42:36.845150] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.949 [2024-11-29 07:42:36.845192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.949 [2024-11-29 07:42:36.845203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.949 [2024-11-29 07:42:36.845214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.949 [2024-11-29 07:42:36.845220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:46.949 [2024-11-29 07:42:36.845229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.949 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.950 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.950 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.950 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.950 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.950 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.210 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.210 "name": "Existed_Raid", 00:10:47.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.210 "strip_size_kb": 64, 00:10:47.210 "state": "configuring", 00:10:47.210 "raid_level": "raid0", 00:10:47.210 "superblock": false, 00:10:47.210 "num_base_bdevs": 4, 00:10:47.210 
"num_base_bdevs_discovered": 1, 00:10:47.210 "num_base_bdevs_operational": 4, 00:10:47.210 "base_bdevs_list": [ 00:10:47.210 { 00:10:47.210 "name": "BaseBdev1", 00:10:47.210 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:47.210 "is_configured": true, 00:10:47.210 "data_offset": 0, 00:10:47.210 "data_size": 65536 00:10:47.210 }, 00:10:47.210 { 00:10:47.210 "name": "BaseBdev2", 00:10:47.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.210 "is_configured": false, 00:10:47.210 "data_offset": 0, 00:10:47.210 "data_size": 0 00:10:47.210 }, 00:10:47.210 { 00:10:47.210 "name": "BaseBdev3", 00:10:47.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.210 "is_configured": false, 00:10:47.210 "data_offset": 0, 00:10:47.210 "data_size": 0 00:10:47.210 }, 00:10:47.210 { 00:10:47.210 "name": "BaseBdev4", 00:10:47.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.210 "is_configured": false, 00:10:47.210 "data_offset": 0, 00:10:47.210 "data_size": 0 00:10:47.210 } 00:10:47.210 ] 00:10:47.210 }' 00:10:47.210 07:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.210 07:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 [2024-11-29 07:42:37.279364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.469 BaseBdev2 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:47.469 07:42:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 [ 00:10:47.469 { 00:10:47.469 "name": "BaseBdev2", 00:10:47.469 "aliases": [ 00:10:47.469 "a18c5c45-2ebc-4dc5-b442-970bd5320d12" 00:10:47.469 ], 00:10:47.469 "product_name": "Malloc disk", 00:10:47.469 "block_size": 512, 00:10:47.469 "num_blocks": 65536, 00:10:47.469 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:47.469 "assigned_rate_limits": { 00:10:47.469 "rw_ios_per_sec": 0, 00:10:47.469 "rw_mbytes_per_sec": 0, 00:10:47.469 "r_mbytes_per_sec": 0, 00:10:47.469 "w_mbytes_per_sec": 0 00:10:47.469 }, 00:10:47.469 "claimed": true, 00:10:47.469 "claim_type": "exclusive_write", 00:10:47.469 "zoned": false, 00:10:47.469 "supported_io_types": { 
00:10:47.469 "read": true, 00:10:47.469 "write": true, 00:10:47.469 "unmap": true, 00:10:47.469 "flush": true, 00:10:47.469 "reset": true, 00:10:47.469 "nvme_admin": false, 00:10:47.469 "nvme_io": false, 00:10:47.469 "nvme_io_md": false, 00:10:47.469 "write_zeroes": true, 00:10:47.469 "zcopy": true, 00:10:47.469 "get_zone_info": false, 00:10:47.469 "zone_management": false, 00:10:47.469 "zone_append": false, 00:10:47.469 "compare": false, 00:10:47.469 "compare_and_write": false, 00:10:47.469 "abort": true, 00:10:47.469 "seek_hole": false, 00:10:47.469 "seek_data": false, 00:10:47.469 "copy": true, 00:10:47.469 "nvme_iov_md": false 00:10:47.469 }, 00:10:47.469 "memory_domains": [ 00:10:47.469 { 00:10:47.469 "dma_device_id": "system", 00:10:47.469 "dma_device_type": 1 00:10:47.469 }, 00:10:47.469 { 00:10:47.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.469 "dma_device_type": 2 00:10:47.469 } 00:10:47.469 ], 00:10:47.469 "driver_specific": {} 00:10:47.469 } 00:10:47.469 ] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.469 "name": "Existed_Raid", 00:10:47.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.469 "strip_size_kb": 64, 00:10:47.469 "state": "configuring", 00:10:47.469 "raid_level": "raid0", 00:10:47.469 "superblock": false, 00:10:47.469 "num_base_bdevs": 4, 00:10:47.469 "num_base_bdevs_discovered": 2, 00:10:47.469 "num_base_bdevs_operational": 4, 00:10:47.469 "base_bdevs_list": [ 00:10:47.469 { 00:10:47.469 "name": "BaseBdev1", 00:10:47.469 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:47.469 "is_configured": true, 00:10:47.469 "data_offset": 0, 00:10:47.469 "data_size": 65536 00:10:47.469 }, 00:10:47.469 { 00:10:47.469 "name": "BaseBdev2", 00:10:47.469 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:47.469 
"is_configured": true, 00:10:47.469 "data_offset": 0, 00:10:47.469 "data_size": 65536 00:10:47.469 }, 00:10:47.469 { 00:10:47.469 "name": "BaseBdev3", 00:10:47.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.469 "is_configured": false, 00:10:47.469 "data_offset": 0, 00:10:47.469 "data_size": 0 00:10:47.469 }, 00:10:47.469 { 00:10:47.469 "name": "BaseBdev4", 00:10:47.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.469 "is_configured": false, 00:10:47.469 "data_offset": 0, 00:10:47.469 "data_size": 0 00:10:47.469 } 00:10:47.469 ] 00:10:47.469 }' 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.469 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 [2024-11-29 07:42:37.842666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.040 BaseBdev3 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 [ 00:10:48.040 { 00:10:48.040 "name": "BaseBdev3", 00:10:48.040 "aliases": [ 00:10:48.040 "57b9c361-4c8c-4eb6-ab34-75806baa6555" 00:10:48.040 ], 00:10:48.040 "product_name": "Malloc disk", 00:10:48.040 "block_size": 512, 00:10:48.040 "num_blocks": 65536, 00:10:48.040 "uuid": "57b9c361-4c8c-4eb6-ab34-75806baa6555", 00:10:48.040 "assigned_rate_limits": { 00:10:48.040 "rw_ios_per_sec": 0, 00:10:48.040 "rw_mbytes_per_sec": 0, 00:10:48.040 "r_mbytes_per_sec": 0, 00:10:48.040 "w_mbytes_per_sec": 0 00:10:48.040 }, 00:10:48.040 "claimed": true, 00:10:48.040 "claim_type": "exclusive_write", 00:10:48.040 "zoned": false, 00:10:48.040 "supported_io_types": { 00:10:48.040 "read": true, 00:10:48.040 "write": true, 00:10:48.040 "unmap": true, 00:10:48.040 "flush": true, 00:10:48.040 "reset": true, 00:10:48.040 "nvme_admin": false, 00:10:48.040 "nvme_io": false, 00:10:48.040 "nvme_io_md": false, 00:10:48.040 "write_zeroes": true, 00:10:48.040 "zcopy": true, 00:10:48.040 "get_zone_info": false, 00:10:48.040 "zone_management": false, 00:10:48.040 "zone_append": false, 00:10:48.040 "compare": false, 00:10:48.040 "compare_and_write": false, 
00:10:48.040 "abort": true, 00:10:48.040 "seek_hole": false, 00:10:48.040 "seek_data": false, 00:10:48.040 "copy": true, 00:10:48.040 "nvme_iov_md": false 00:10:48.040 }, 00:10:48.040 "memory_domains": [ 00:10:48.040 { 00:10:48.040 "dma_device_id": "system", 00:10:48.040 "dma_device_type": 1 00:10:48.040 }, 00:10:48.040 { 00:10:48.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.040 "dma_device_type": 2 00:10:48.040 } 00:10:48.040 ], 00:10:48.040 "driver_specific": {} 00:10:48.040 } 00:10:48.040 ] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.040 "name": "Existed_Raid", 00:10:48.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.040 "strip_size_kb": 64, 00:10:48.040 "state": "configuring", 00:10:48.040 "raid_level": "raid0", 00:10:48.040 "superblock": false, 00:10:48.040 "num_base_bdevs": 4, 00:10:48.040 "num_base_bdevs_discovered": 3, 00:10:48.040 "num_base_bdevs_operational": 4, 00:10:48.040 "base_bdevs_list": [ 00:10:48.040 { 00:10:48.040 "name": "BaseBdev1", 00:10:48.040 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:48.040 "is_configured": true, 00:10:48.040 "data_offset": 0, 00:10:48.040 "data_size": 65536 00:10:48.040 }, 00:10:48.040 { 00:10:48.040 "name": "BaseBdev2", 00:10:48.040 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:48.040 "is_configured": true, 00:10:48.040 "data_offset": 0, 00:10:48.040 "data_size": 65536 00:10:48.040 }, 00:10:48.040 { 00:10:48.040 "name": "BaseBdev3", 00:10:48.040 "uuid": "57b9c361-4c8c-4eb6-ab34-75806baa6555", 00:10:48.040 "is_configured": true, 00:10:48.040 "data_offset": 0, 00:10:48.040 "data_size": 65536 00:10:48.040 }, 00:10:48.040 { 00:10:48.040 "name": "BaseBdev4", 00:10:48.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.040 "is_configured": false, 
00:10:48.040 "data_offset": 0, 00:10:48.040 "data_size": 0 00:10:48.040 } 00:10:48.040 ] 00:10:48.040 }' 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.040 07:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.610 [2024-11-29 07:42:38.343379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.610 [2024-11-29 07:42:38.343497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.610 [2024-11-29 07:42:38.343526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:48.610 [2024-11-29 07:42:38.343845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.610 [2024-11-29 07:42:38.344056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.610 [2024-11-29 07:42:38.344111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:48.610 [2024-11-29 07:42:38.344410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.610 BaseBdev4 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.610 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.611 [ 00:10:48.611 { 00:10:48.611 "name": "BaseBdev4", 00:10:48.611 "aliases": [ 00:10:48.611 "1304acf1-06e2-4f96-b5ff-9aab9524b838" 00:10:48.611 ], 00:10:48.611 "product_name": "Malloc disk", 00:10:48.611 "block_size": 512, 00:10:48.611 "num_blocks": 65536, 00:10:48.611 "uuid": "1304acf1-06e2-4f96-b5ff-9aab9524b838", 00:10:48.611 "assigned_rate_limits": { 00:10:48.611 "rw_ios_per_sec": 0, 00:10:48.611 "rw_mbytes_per_sec": 0, 00:10:48.611 "r_mbytes_per_sec": 0, 00:10:48.611 "w_mbytes_per_sec": 0 00:10:48.611 }, 00:10:48.611 "claimed": true, 00:10:48.611 "claim_type": "exclusive_write", 00:10:48.611 "zoned": false, 00:10:48.611 "supported_io_types": { 00:10:48.611 "read": true, 00:10:48.611 "write": true, 00:10:48.611 "unmap": true, 00:10:48.611 "flush": true, 00:10:48.611 "reset": true, 00:10:48.611 
"nvme_admin": false, 00:10:48.611 "nvme_io": false, 00:10:48.611 "nvme_io_md": false, 00:10:48.611 "write_zeroes": true, 00:10:48.611 "zcopy": true, 00:10:48.611 "get_zone_info": false, 00:10:48.611 "zone_management": false, 00:10:48.611 "zone_append": false, 00:10:48.611 "compare": false, 00:10:48.611 "compare_and_write": false, 00:10:48.611 "abort": true, 00:10:48.611 "seek_hole": false, 00:10:48.611 "seek_data": false, 00:10:48.611 "copy": true, 00:10:48.611 "nvme_iov_md": false 00:10:48.611 }, 00:10:48.611 "memory_domains": [ 00:10:48.611 { 00:10:48.611 "dma_device_id": "system", 00:10:48.611 "dma_device_type": 1 00:10:48.611 }, 00:10:48.611 { 00:10:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.611 "dma_device_type": 2 00:10:48.611 } 00:10:48.611 ], 00:10:48.611 "driver_specific": {} 00:10:48.611 } 00:10:48.611 ] 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.611 07:42:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.611 "name": "Existed_Raid", 00:10:48.611 "uuid": "2020c306-41a1-4823-bed2-c1d4dac1a60d", 00:10:48.611 "strip_size_kb": 64, 00:10:48.611 "state": "online", 00:10:48.611 "raid_level": "raid0", 00:10:48.611 "superblock": false, 00:10:48.611 "num_base_bdevs": 4, 00:10:48.611 "num_base_bdevs_discovered": 4, 00:10:48.611 "num_base_bdevs_operational": 4, 00:10:48.611 "base_bdevs_list": [ 00:10:48.611 { 00:10:48.611 "name": "BaseBdev1", 00:10:48.611 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:48.611 "is_configured": true, 00:10:48.611 "data_offset": 0, 00:10:48.611 "data_size": 65536 00:10:48.611 }, 00:10:48.611 { 00:10:48.611 "name": "BaseBdev2", 00:10:48.611 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:48.611 "is_configured": true, 00:10:48.611 "data_offset": 0, 00:10:48.611 "data_size": 65536 00:10:48.611 }, 00:10:48.611 { 00:10:48.611 "name": "BaseBdev3", 00:10:48.611 "uuid": 
"57b9c361-4c8c-4eb6-ab34-75806baa6555", 00:10:48.611 "is_configured": true, 00:10:48.611 "data_offset": 0, 00:10:48.611 "data_size": 65536 00:10:48.611 }, 00:10:48.611 { 00:10:48.611 "name": "BaseBdev4", 00:10:48.611 "uuid": "1304acf1-06e2-4f96-b5ff-9aab9524b838", 00:10:48.611 "is_configured": true, 00:10:48.611 "data_offset": 0, 00:10:48.611 "data_size": 65536 00:10:48.611 } 00:10:48.611 ] 00:10:48.611 }' 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.611 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.871 [2024-11-29 07:42:38.790959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.871 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.131 07:42:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.131 "name": "Existed_Raid", 00:10:49.131 "aliases": [ 00:10:49.131 "2020c306-41a1-4823-bed2-c1d4dac1a60d" 00:10:49.131 ], 00:10:49.132 "product_name": "Raid Volume", 00:10:49.132 "block_size": 512, 00:10:49.132 "num_blocks": 262144, 00:10:49.132 "uuid": "2020c306-41a1-4823-bed2-c1d4dac1a60d", 00:10:49.132 "assigned_rate_limits": { 00:10:49.132 "rw_ios_per_sec": 0, 00:10:49.132 "rw_mbytes_per_sec": 0, 00:10:49.132 "r_mbytes_per_sec": 0, 00:10:49.132 "w_mbytes_per_sec": 0 00:10:49.132 }, 00:10:49.132 "claimed": false, 00:10:49.132 "zoned": false, 00:10:49.132 "supported_io_types": { 00:10:49.132 "read": true, 00:10:49.132 "write": true, 00:10:49.132 "unmap": true, 00:10:49.132 "flush": true, 00:10:49.132 "reset": true, 00:10:49.132 "nvme_admin": false, 00:10:49.132 "nvme_io": false, 00:10:49.132 "nvme_io_md": false, 00:10:49.132 "write_zeroes": true, 00:10:49.132 "zcopy": false, 00:10:49.132 "get_zone_info": false, 00:10:49.132 "zone_management": false, 00:10:49.132 "zone_append": false, 00:10:49.132 "compare": false, 00:10:49.132 "compare_and_write": false, 00:10:49.132 "abort": false, 00:10:49.132 "seek_hole": false, 00:10:49.132 "seek_data": false, 00:10:49.132 "copy": false, 00:10:49.132 "nvme_iov_md": false 00:10:49.132 }, 00:10:49.132 "memory_domains": [ 00:10:49.132 { 00:10:49.132 "dma_device_id": "system", 00:10:49.132 "dma_device_type": 1 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.132 "dma_device_type": 2 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "system", 00:10:49.132 "dma_device_type": 1 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.132 "dma_device_type": 2 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "system", 00:10:49.132 "dma_device_type": 1 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:49.132 "dma_device_type": 2 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "system", 00:10:49.132 "dma_device_type": 1 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.132 "dma_device_type": 2 00:10:49.132 } 00:10:49.132 ], 00:10:49.132 "driver_specific": { 00:10:49.132 "raid": { 00:10:49.132 "uuid": "2020c306-41a1-4823-bed2-c1d4dac1a60d", 00:10:49.132 "strip_size_kb": 64, 00:10:49.132 "state": "online", 00:10:49.132 "raid_level": "raid0", 00:10:49.132 "superblock": false, 00:10:49.132 "num_base_bdevs": 4, 00:10:49.132 "num_base_bdevs_discovered": 4, 00:10:49.132 "num_base_bdevs_operational": 4, 00:10:49.132 "base_bdevs_list": [ 00:10:49.132 { 00:10:49.132 "name": "BaseBdev1", 00:10:49.132 "uuid": "cb6e252c-ed0a-4888-917b-969b768abbbe", 00:10:49.132 "is_configured": true, 00:10:49.132 "data_offset": 0, 00:10:49.132 "data_size": 65536 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "name": "BaseBdev2", 00:10:49.132 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:49.132 "is_configured": true, 00:10:49.132 "data_offset": 0, 00:10:49.132 "data_size": 65536 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "name": "BaseBdev3", 00:10:49.132 "uuid": "57b9c361-4c8c-4eb6-ab34-75806baa6555", 00:10:49.132 "is_configured": true, 00:10:49.132 "data_offset": 0, 00:10:49.132 "data_size": 65536 00:10:49.132 }, 00:10:49.132 { 00:10:49.132 "name": "BaseBdev4", 00:10:49.132 "uuid": "1304acf1-06e2-4f96-b5ff-9aab9524b838", 00:10:49.132 "is_configured": true, 00:10:49.132 "data_offset": 0, 00:10:49.132 "data_size": 65536 00:10:49.132 } 00:10:49.132 ] 00:10:49.132 } 00:10:49.132 } 00:10:49.132 }' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:49.132 BaseBdev2 00:10:49.132 BaseBdev3 
00:10:49.132 BaseBdev4' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.132 07:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.132 07:42:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.132 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.393 07:42:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.393 [2024-11-29 07:42:39.118135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.393 [2024-11-29 07:42:39.118201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.393 [2024-11-29 07:42:39.118271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.393 "name": "Existed_Raid", 00:10:49.393 "uuid": "2020c306-41a1-4823-bed2-c1d4dac1a60d", 00:10:49.393 "strip_size_kb": 64, 00:10:49.393 "state": "offline", 00:10:49.393 "raid_level": "raid0", 00:10:49.393 "superblock": false, 00:10:49.393 "num_base_bdevs": 4, 00:10:49.393 "num_base_bdevs_discovered": 3, 00:10:49.393 "num_base_bdevs_operational": 3, 00:10:49.393 "base_bdevs_list": [ 00:10:49.393 { 00:10:49.393 "name": null, 00:10:49.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.393 "is_configured": false, 00:10:49.393 "data_offset": 0, 00:10:49.393 "data_size": 65536 00:10:49.393 }, 00:10:49.393 { 00:10:49.393 "name": "BaseBdev2", 00:10:49.393 "uuid": "a18c5c45-2ebc-4dc5-b442-970bd5320d12", 00:10:49.393 "is_configured": 
true, 00:10:49.393 "data_offset": 0, 00:10:49.393 "data_size": 65536 00:10:49.393 }, 00:10:49.393 { 00:10:49.393 "name": "BaseBdev3", 00:10:49.393 "uuid": "57b9c361-4c8c-4eb6-ab34-75806baa6555", 00:10:49.393 "is_configured": true, 00:10:49.393 "data_offset": 0, 00:10:49.393 "data_size": 65536 00:10:49.393 }, 00:10:49.393 { 00:10:49.393 "name": "BaseBdev4", 00:10:49.393 "uuid": "1304acf1-06e2-4f96-b5ff-9aab9524b838", 00:10:49.393 "is_configured": true, 00:10:49.393 "data_offset": 0, 00:10:49.393 "data_size": 65536 00:10:49.393 } 00:10:49.393 ] 00:10:49.393 }' 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.393 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.963 [2024-11-29 07:42:39.727806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.963 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.963 [2024-11-29 07:42:39.880725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.223 07:42:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.223 07:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.223 [2024-11-29 07:42:40.032723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:50.223 [2024-11-29 07:42:40.032835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.223 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.483 BaseBdev2 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.483 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.483 [ 00:10:50.483 { 00:10:50.483 "name": "BaseBdev2", 00:10:50.483 "aliases": [ 00:10:50.483 "488bbddd-6483-44c0-8313-d7b4cca5507c" 00:10:50.483 ], 00:10:50.483 "product_name": "Malloc disk", 00:10:50.483 "block_size": 512, 00:10:50.483 "num_blocks": 65536, 00:10:50.483 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:50.483 "assigned_rate_limits": { 00:10:50.483 "rw_ios_per_sec": 0, 00:10:50.483 "rw_mbytes_per_sec": 0, 00:10:50.483 "r_mbytes_per_sec": 0, 00:10:50.483 "w_mbytes_per_sec": 0 00:10:50.483 }, 00:10:50.483 "claimed": false, 00:10:50.483 "zoned": false, 00:10:50.483 "supported_io_types": { 00:10:50.483 "read": true, 00:10:50.483 "write": true, 00:10:50.483 "unmap": true, 00:10:50.483 "flush": true, 00:10:50.483 "reset": true, 00:10:50.483 "nvme_admin": false, 00:10:50.483 "nvme_io": false, 00:10:50.483 "nvme_io_md": false, 00:10:50.483 "write_zeroes": true, 00:10:50.483 "zcopy": true, 00:10:50.484 "get_zone_info": false, 00:10:50.484 "zone_management": false, 00:10:50.484 "zone_append": false, 00:10:50.484 "compare": false, 00:10:50.484 "compare_and_write": false, 00:10:50.484 "abort": true, 00:10:50.484 "seek_hole": false, 00:10:50.484 
"seek_data": false, 00:10:50.484 "copy": true, 00:10:50.484 "nvme_iov_md": false 00:10:50.484 }, 00:10:50.484 "memory_domains": [ 00:10:50.484 { 00:10:50.484 "dma_device_id": "system", 00:10:50.484 "dma_device_type": 1 00:10:50.484 }, 00:10:50.484 { 00:10:50.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.484 "dma_device_type": 2 00:10:50.484 } 00:10:50.484 ], 00:10:50.484 "driver_specific": {} 00:10:50.484 } 00:10:50.484 ] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 BaseBdev3 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 [ 00:10:50.484 { 00:10:50.484 "name": "BaseBdev3", 00:10:50.484 "aliases": [ 00:10:50.484 "4d25da87-4883-499c-8cb6-abaa3347c9cf" 00:10:50.484 ], 00:10:50.484 "product_name": "Malloc disk", 00:10:50.484 "block_size": 512, 00:10:50.484 "num_blocks": 65536, 00:10:50.484 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:50.484 "assigned_rate_limits": { 00:10:50.484 "rw_ios_per_sec": 0, 00:10:50.484 "rw_mbytes_per_sec": 0, 00:10:50.484 "r_mbytes_per_sec": 0, 00:10:50.484 "w_mbytes_per_sec": 0 00:10:50.484 }, 00:10:50.484 "claimed": false, 00:10:50.484 "zoned": false, 00:10:50.484 "supported_io_types": { 00:10:50.484 "read": true, 00:10:50.484 "write": true, 00:10:50.484 "unmap": true, 00:10:50.484 "flush": true, 00:10:50.484 "reset": true, 00:10:50.484 "nvme_admin": false, 00:10:50.484 "nvme_io": false, 00:10:50.484 "nvme_io_md": false, 00:10:50.484 "write_zeroes": true, 00:10:50.484 "zcopy": true, 00:10:50.484 "get_zone_info": false, 00:10:50.484 "zone_management": false, 00:10:50.484 "zone_append": false, 00:10:50.484 "compare": false, 00:10:50.484 "compare_and_write": false, 00:10:50.484 "abort": true, 00:10:50.484 "seek_hole": false, 00:10:50.484 "seek_data": false, 
00:10:50.484 "copy": true, 00:10:50.484 "nvme_iov_md": false 00:10:50.484 }, 00:10:50.484 "memory_domains": [ 00:10:50.484 { 00:10:50.484 "dma_device_id": "system", 00:10:50.484 "dma_device_type": 1 00:10:50.484 }, 00:10:50.484 { 00:10:50.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.484 "dma_device_type": 2 00:10:50.484 } 00:10:50.484 ], 00:10:50.484 "driver_specific": {} 00:10:50.484 } 00:10:50.484 ] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 BaseBdev4 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.484 
07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 [ 00:10:50.484 { 00:10:50.484 "name": "BaseBdev4", 00:10:50.484 "aliases": [ 00:10:50.484 "1015492a-057a-4782-8e50-66d10bcfc6c1" 00:10:50.484 ], 00:10:50.484 "product_name": "Malloc disk", 00:10:50.484 "block_size": 512, 00:10:50.484 "num_blocks": 65536, 00:10:50.484 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:50.484 "assigned_rate_limits": { 00:10:50.484 "rw_ios_per_sec": 0, 00:10:50.484 "rw_mbytes_per_sec": 0, 00:10:50.484 "r_mbytes_per_sec": 0, 00:10:50.484 "w_mbytes_per_sec": 0 00:10:50.484 }, 00:10:50.484 "claimed": false, 00:10:50.484 "zoned": false, 00:10:50.484 "supported_io_types": { 00:10:50.484 "read": true, 00:10:50.484 "write": true, 00:10:50.484 "unmap": true, 00:10:50.484 "flush": true, 00:10:50.484 "reset": true, 00:10:50.484 "nvme_admin": false, 00:10:50.484 "nvme_io": false, 00:10:50.484 "nvme_io_md": false, 00:10:50.484 "write_zeroes": true, 00:10:50.484 "zcopy": true, 00:10:50.484 "get_zone_info": false, 00:10:50.484 "zone_management": false, 00:10:50.484 "zone_append": false, 00:10:50.484 "compare": false, 00:10:50.484 "compare_and_write": false, 00:10:50.484 "abort": true, 00:10:50.484 "seek_hole": false, 00:10:50.484 "seek_data": false, 00:10:50.484 
"copy": true, 00:10:50.484 "nvme_iov_md": false 00:10:50.484 }, 00:10:50.484 "memory_domains": [ 00:10:50.484 { 00:10:50.484 "dma_device_id": "system", 00:10:50.484 "dma_device_type": 1 00:10:50.484 }, 00:10:50.484 { 00:10:50.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.484 "dma_device_type": 2 00:10:50.484 } 00:10:50.484 ], 00:10:50.484 "driver_specific": {} 00:10:50.484 } 00:10:50.484 ] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.484 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.484 [2024-11-29 07:42:40.426627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.484 [2024-11-29 07:42:40.426711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.484 [2024-11-29 07:42:40.426769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.745 [2024-11-29 07:42:40.428689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.745 [2024-11-29 07:42:40.428804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.745 07:42:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.745 "name": "Existed_Raid", 00:10:50.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.745 "strip_size_kb": 64, 00:10:50.745 "state": "configuring", 00:10:50.745 
"raid_level": "raid0", 00:10:50.745 "superblock": false, 00:10:50.745 "num_base_bdevs": 4, 00:10:50.745 "num_base_bdevs_discovered": 3, 00:10:50.745 "num_base_bdevs_operational": 4, 00:10:50.745 "base_bdevs_list": [ 00:10:50.745 { 00:10:50.745 "name": "BaseBdev1", 00:10:50.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.745 "is_configured": false, 00:10:50.745 "data_offset": 0, 00:10:50.745 "data_size": 0 00:10:50.745 }, 00:10:50.745 { 00:10:50.745 "name": "BaseBdev2", 00:10:50.745 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:50.745 "is_configured": true, 00:10:50.745 "data_offset": 0, 00:10:50.745 "data_size": 65536 00:10:50.745 }, 00:10:50.745 { 00:10:50.745 "name": "BaseBdev3", 00:10:50.745 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:50.745 "is_configured": true, 00:10:50.745 "data_offset": 0, 00:10:50.745 "data_size": 65536 00:10:50.745 }, 00:10:50.745 { 00:10:50.745 "name": "BaseBdev4", 00:10:50.745 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:50.745 "is_configured": true, 00:10:50.745 "data_offset": 0, 00:10:50.745 "data_size": 65536 00:10:50.745 } 00:10:50.745 ] 00:10:50.745 }' 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.745 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.006 [2024-11-29 07:42:40.897839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.006 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.006 "name": "Existed_Raid", 00:10:51.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.006 "strip_size_kb": 64, 00:10:51.006 "state": "configuring", 00:10:51.006 "raid_level": "raid0", 00:10:51.006 "superblock": false, 00:10:51.006 
"num_base_bdevs": 4, 00:10:51.006 "num_base_bdevs_discovered": 2, 00:10:51.007 "num_base_bdevs_operational": 4, 00:10:51.007 "base_bdevs_list": [ 00:10:51.007 { 00:10:51.007 "name": "BaseBdev1", 00:10:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.007 "is_configured": false, 00:10:51.007 "data_offset": 0, 00:10:51.007 "data_size": 0 00:10:51.007 }, 00:10:51.007 { 00:10:51.007 "name": null, 00:10:51.007 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:51.007 "is_configured": false, 00:10:51.007 "data_offset": 0, 00:10:51.007 "data_size": 65536 00:10:51.007 }, 00:10:51.007 { 00:10:51.007 "name": "BaseBdev3", 00:10:51.007 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:51.007 "is_configured": true, 00:10:51.007 "data_offset": 0, 00:10:51.007 "data_size": 65536 00:10:51.007 }, 00:10:51.007 { 00:10:51.007 "name": "BaseBdev4", 00:10:51.007 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:51.007 "is_configured": true, 00:10:51.007 "data_offset": 0, 00:10:51.007 "data_size": 65536 00:10:51.007 } 00:10:51.007 ] 00:10:51.007 }' 00:10:51.007 07:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.007 07:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.577 07:42:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 [2024-11-29 07:42:41.377827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.577 BaseBdev1 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 07:42:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.577 [ 00:10:51.577 { 00:10:51.577 "name": "BaseBdev1", 00:10:51.577 "aliases": [ 00:10:51.577 "df5312f5-797f-4d5a-b075-9c4c535c10a3" 00:10:51.577 ], 00:10:51.577 "product_name": "Malloc disk", 00:10:51.577 "block_size": 512, 00:10:51.577 "num_blocks": 65536, 00:10:51.577 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:51.577 "assigned_rate_limits": { 00:10:51.577 "rw_ios_per_sec": 0, 00:10:51.577 "rw_mbytes_per_sec": 0, 00:10:51.577 "r_mbytes_per_sec": 0, 00:10:51.577 "w_mbytes_per_sec": 0 00:10:51.577 }, 00:10:51.577 "claimed": true, 00:10:51.577 "claim_type": "exclusive_write", 00:10:51.577 "zoned": false, 00:10:51.577 "supported_io_types": { 00:10:51.577 "read": true, 00:10:51.577 "write": true, 00:10:51.577 "unmap": true, 00:10:51.577 "flush": true, 00:10:51.577 "reset": true, 00:10:51.577 "nvme_admin": false, 00:10:51.577 "nvme_io": false, 00:10:51.577 "nvme_io_md": false, 00:10:51.577 "write_zeroes": true, 00:10:51.577 "zcopy": true, 00:10:51.577 "get_zone_info": false, 00:10:51.577 "zone_management": false, 00:10:51.577 "zone_append": false, 00:10:51.577 "compare": false, 00:10:51.577 "compare_and_write": false, 00:10:51.577 "abort": true, 00:10:51.578 "seek_hole": false, 00:10:51.578 "seek_data": false, 00:10:51.578 "copy": true, 00:10:51.578 "nvme_iov_md": false 00:10:51.578 }, 00:10:51.578 "memory_domains": [ 00:10:51.578 { 00:10:51.578 "dma_device_id": "system", 00:10:51.578 "dma_device_type": 1 00:10:51.578 }, 00:10:51.578 { 00:10:51.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.578 "dma_device_type": 2 00:10:51.578 } 00:10:51.578 ], 00:10:51.578 "driver_specific": {} 00:10:51.578 } 00:10:51.578 ] 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.578 "name": "Existed_Raid", 00:10:51.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.578 "strip_size_kb": 64, 00:10:51.578 "state": "configuring", 00:10:51.578 "raid_level": "raid0", 00:10:51.578 "superblock": false, 
00:10:51.578 "num_base_bdevs": 4, 00:10:51.578 "num_base_bdevs_discovered": 3, 00:10:51.578 "num_base_bdevs_operational": 4, 00:10:51.578 "base_bdevs_list": [ 00:10:51.578 { 00:10:51.578 "name": "BaseBdev1", 00:10:51.578 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:51.578 "is_configured": true, 00:10:51.578 "data_offset": 0, 00:10:51.578 "data_size": 65536 00:10:51.578 }, 00:10:51.578 { 00:10:51.578 "name": null, 00:10:51.578 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:51.578 "is_configured": false, 00:10:51.578 "data_offset": 0, 00:10:51.578 "data_size": 65536 00:10:51.578 }, 00:10:51.578 { 00:10:51.578 "name": "BaseBdev3", 00:10:51.578 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:51.578 "is_configured": true, 00:10:51.578 "data_offset": 0, 00:10:51.578 "data_size": 65536 00:10:51.578 }, 00:10:51.578 { 00:10:51.578 "name": "BaseBdev4", 00:10:51.578 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:51.578 "is_configured": true, 00:10:51.578 "data_offset": 0, 00:10:51.578 "data_size": 65536 00:10:51.578 } 00:10:51.578 ] 00:10:51.578 }' 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.578 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.149 07:42:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 [2024-11-29 07:42:41.901056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.149 07:42:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.149 "name": "Existed_Raid", 00:10:52.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.149 "strip_size_kb": 64, 00:10:52.149 "state": "configuring", 00:10:52.149 "raid_level": "raid0", 00:10:52.149 "superblock": false, 00:10:52.149 "num_base_bdevs": 4, 00:10:52.149 "num_base_bdevs_discovered": 2, 00:10:52.149 "num_base_bdevs_operational": 4, 00:10:52.149 "base_bdevs_list": [ 00:10:52.149 { 00:10:52.149 "name": "BaseBdev1", 00:10:52.149 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:52.149 "is_configured": true, 00:10:52.149 "data_offset": 0, 00:10:52.149 "data_size": 65536 00:10:52.149 }, 00:10:52.149 { 00:10:52.149 "name": null, 00:10:52.149 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:52.149 "is_configured": false, 00:10:52.149 "data_offset": 0, 00:10:52.149 "data_size": 65536 00:10:52.149 }, 00:10:52.149 { 00:10:52.149 "name": null, 00:10:52.149 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:52.149 "is_configured": false, 00:10:52.149 "data_offset": 0, 00:10:52.149 "data_size": 65536 00:10:52.149 }, 00:10:52.149 { 00:10:52.149 "name": "BaseBdev4", 00:10:52.149 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:52.149 "is_configured": true, 00:10:52.149 "data_offset": 0, 00:10:52.149 "data_size": 65536 00:10:52.149 } 00:10:52.149 ] 00:10:52.149 }' 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.149 07:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.409 07:42:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.409 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.409 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.409 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.409 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.668 [2024-11-29 07:42:42.368233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.668 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.668 "name": "Existed_Raid", 00:10:52.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.668 "strip_size_kb": 64, 00:10:52.668 "state": "configuring", 00:10:52.668 "raid_level": "raid0", 00:10:52.669 "superblock": false, 00:10:52.669 "num_base_bdevs": 4, 00:10:52.669 "num_base_bdevs_discovered": 3, 00:10:52.669 "num_base_bdevs_operational": 4, 00:10:52.669 "base_bdevs_list": [ 00:10:52.669 { 00:10:52.669 "name": "BaseBdev1", 00:10:52.669 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:52.669 "is_configured": true, 00:10:52.669 "data_offset": 0, 00:10:52.669 "data_size": 65536 00:10:52.669 }, 00:10:52.669 { 00:10:52.669 "name": null, 00:10:52.669 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:52.669 "is_configured": false, 00:10:52.669 "data_offset": 0, 00:10:52.669 "data_size": 65536 00:10:52.669 }, 00:10:52.669 { 00:10:52.669 "name": "BaseBdev3", 00:10:52.669 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 
00:10:52.669 "is_configured": true, 00:10:52.669 "data_offset": 0, 00:10:52.669 "data_size": 65536 00:10:52.669 }, 00:10:52.669 { 00:10:52.669 "name": "BaseBdev4", 00:10:52.669 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:52.669 "is_configured": true, 00:10:52.669 "data_offset": 0, 00:10:52.669 "data_size": 65536 00:10:52.669 } 00:10:52.669 ] 00:10:52.669 }' 00:10:52.669 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.669 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.933 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.933 [2024-11-29 07:42:42.855414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.198 07:42:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.198 07:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.198 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.198 "name": "Existed_Raid", 00:10:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.198 "strip_size_kb": 64, 00:10:53.198 "state": "configuring", 00:10:53.198 "raid_level": "raid0", 00:10:53.198 "superblock": false, 00:10:53.198 "num_base_bdevs": 4, 00:10:53.198 "num_base_bdevs_discovered": 2, 00:10:53.198 
"num_base_bdevs_operational": 4, 00:10:53.198 "base_bdevs_list": [ 00:10:53.198 { 00:10:53.198 "name": null, 00:10:53.198 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:53.198 "is_configured": false, 00:10:53.198 "data_offset": 0, 00:10:53.198 "data_size": 65536 00:10:53.198 }, 00:10:53.198 { 00:10:53.198 "name": null, 00:10:53.198 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:53.198 "is_configured": false, 00:10:53.198 "data_offset": 0, 00:10:53.198 "data_size": 65536 00:10:53.198 }, 00:10:53.198 { 00:10:53.198 "name": "BaseBdev3", 00:10:53.198 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:53.198 "is_configured": true, 00:10:53.198 "data_offset": 0, 00:10:53.198 "data_size": 65536 00:10:53.198 }, 00:10:53.198 { 00:10:53.198 "name": "BaseBdev4", 00:10:53.198 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:53.198 "is_configured": true, 00:10:53.198 "data_offset": 0, 00:10:53.198 "data_size": 65536 00:10:53.198 } 00:10:53.198 ] 00:10:53.198 }' 00:10:53.198 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.198 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.456 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.456 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.456 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.456 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.714 [2024-11-29 07:42:43.448057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.714 
07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.714 "name": "Existed_Raid", 00:10:53.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.714 "strip_size_kb": 64, 00:10:53.714 "state": "configuring", 00:10:53.714 "raid_level": "raid0", 00:10:53.714 "superblock": false, 00:10:53.714 "num_base_bdevs": 4, 00:10:53.714 "num_base_bdevs_discovered": 3, 00:10:53.714 "num_base_bdevs_operational": 4, 00:10:53.714 "base_bdevs_list": [ 00:10:53.714 { 00:10:53.714 "name": null, 00:10:53.714 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:53.714 "is_configured": false, 00:10:53.714 "data_offset": 0, 00:10:53.714 "data_size": 65536 00:10:53.714 }, 00:10:53.714 { 00:10:53.714 "name": "BaseBdev2", 00:10:53.714 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:53.714 "is_configured": true, 00:10:53.714 "data_offset": 0, 00:10:53.714 "data_size": 65536 00:10:53.714 }, 00:10:53.714 { 00:10:53.714 "name": "BaseBdev3", 00:10:53.714 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:53.714 "is_configured": true, 00:10:53.714 "data_offset": 0, 00:10:53.714 "data_size": 65536 00:10:53.714 }, 00:10:53.714 { 00:10:53.714 "name": "BaseBdev4", 00:10:53.714 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:53.714 "is_configured": true, 00:10:53.714 "data_offset": 0, 00:10:53.714 "data_size": 65536 00:10:53.714 } 00:10:53.714 ] 00:10:53.714 }' 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.714 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.971 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.971 07:42:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.971 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.971 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df5312f5-797f-4d5a-b075-9c4c535c10a3 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 07:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 [2024-11-29 07:42:44.015579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.231 [2024-11-29 07:42:44.015734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.231 [2024-11-29 07:42:44.015763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:54.231 [2024-11-29 07:42:44.016058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:54.231 [2024-11-29 07:42:44.016282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.231 [2024-11-29 07:42:44.016329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:54.231 [2024-11-29 07:42:44.016619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.231 NewBaseBdev 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:54.231 [ 00:10:54.231 { 00:10:54.231 "name": "NewBaseBdev", 00:10:54.231 "aliases": [ 00:10:54.231 "df5312f5-797f-4d5a-b075-9c4c535c10a3" 00:10:54.231 ], 00:10:54.231 "product_name": "Malloc disk", 00:10:54.231 "block_size": 512, 00:10:54.231 "num_blocks": 65536, 00:10:54.231 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:54.231 "assigned_rate_limits": { 00:10:54.231 "rw_ios_per_sec": 0, 00:10:54.231 "rw_mbytes_per_sec": 0, 00:10:54.231 "r_mbytes_per_sec": 0, 00:10:54.231 "w_mbytes_per_sec": 0 00:10:54.231 }, 00:10:54.231 "claimed": true, 00:10:54.231 "claim_type": "exclusive_write", 00:10:54.232 "zoned": false, 00:10:54.232 "supported_io_types": { 00:10:54.232 "read": true, 00:10:54.232 "write": true, 00:10:54.232 "unmap": true, 00:10:54.232 "flush": true, 00:10:54.232 "reset": true, 00:10:54.232 "nvme_admin": false, 00:10:54.232 "nvme_io": false, 00:10:54.232 "nvme_io_md": false, 00:10:54.232 "write_zeroes": true, 00:10:54.232 "zcopy": true, 00:10:54.232 "get_zone_info": false, 00:10:54.232 "zone_management": false, 00:10:54.232 "zone_append": false, 00:10:54.232 "compare": false, 00:10:54.232 "compare_and_write": false, 00:10:54.232 "abort": true, 00:10:54.232 "seek_hole": false, 00:10:54.232 "seek_data": false, 00:10:54.232 "copy": true, 00:10:54.232 "nvme_iov_md": false 00:10:54.232 }, 00:10:54.232 "memory_domains": [ 00:10:54.232 { 00:10:54.232 "dma_device_id": "system", 00:10:54.232 "dma_device_type": 1 00:10:54.232 }, 00:10:54.232 { 00:10:54.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.232 "dma_device_type": 2 00:10:54.232 } 00:10:54.232 ], 00:10:54.232 "driver_specific": {} 00:10:54.232 } 00:10:54.232 ] 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.232 "name": "Existed_Raid", 00:10:54.232 "uuid": "c9746e0a-8cc9-4908-aa9b-341a4e785a5c", 00:10:54.232 "strip_size_kb": 64, 00:10:54.232 "state": "online", 00:10:54.232 "raid_level": "raid0", 00:10:54.232 "superblock": false, 00:10:54.232 "num_base_bdevs": 4, 00:10:54.232 
"num_base_bdevs_discovered": 4, 00:10:54.232 "num_base_bdevs_operational": 4, 00:10:54.232 "base_bdevs_list": [ 00:10:54.232 { 00:10:54.232 "name": "NewBaseBdev", 00:10:54.232 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:54.232 "is_configured": true, 00:10:54.232 "data_offset": 0, 00:10:54.232 "data_size": 65536 00:10:54.232 }, 00:10:54.232 { 00:10:54.232 "name": "BaseBdev2", 00:10:54.232 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:54.232 "is_configured": true, 00:10:54.232 "data_offset": 0, 00:10:54.232 "data_size": 65536 00:10:54.232 }, 00:10:54.232 { 00:10:54.232 "name": "BaseBdev3", 00:10:54.232 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:54.232 "is_configured": true, 00:10:54.232 "data_offset": 0, 00:10:54.232 "data_size": 65536 00:10:54.232 }, 00:10:54.232 { 00:10:54.232 "name": "BaseBdev4", 00:10:54.232 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:54.232 "is_configured": true, 00:10:54.232 "data_offset": 0, 00:10:54.232 "data_size": 65536 00:10:54.232 } 00:10:54.232 ] 00:10:54.232 }' 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.232 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.801 [2024-11-29 07:42:44.491181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.801 "name": "Existed_Raid", 00:10:54.801 "aliases": [ 00:10:54.801 "c9746e0a-8cc9-4908-aa9b-341a4e785a5c" 00:10:54.801 ], 00:10:54.801 "product_name": "Raid Volume", 00:10:54.801 "block_size": 512, 00:10:54.801 "num_blocks": 262144, 00:10:54.801 "uuid": "c9746e0a-8cc9-4908-aa9b-341a4e785a5c", 00:10:54.801 "assigned_rate_limits": { 00:10:54.801 "rw_ios_per_sec": 0, 00:10:54.801 "rw_mbytes_per_sec": 0, 00:10:54.801 "r_mbytes_per_sec": 0, 00:10:54.801 "w_mbytes_per_sec": 0 00:10:54.801 }, 00:10:54.801 "claimed": false, 00:10:54.801 "zoned": false, 00:10:54.801 "supported_io_types": { 00:10:54.801 "read": true, 00:10:54.801 "write": true, 00:10:54.801 "unmap": true, 00:10:54.801 "flush": true, 00:10:54.801 "reset": true, 00:10:54.801 "nvme_admin": false, 00:10:54.801 "nvme_io": false, 00:10:54.801 "nvme_io_md": false, 00:10:54.801 "write_zeroes": true, 00:10:54.801 "zcopy": false, 00:10:54.801 "get_zone_info": false, 00:10:54.801 "zone_management": false, 00:10:54.801 "zone_append": false, 00:10:54.801 "compare": false, 00:10:54.801 "compare_and_write": false, 00:10:54.801 "abort": false, 00:10:54.801 "seek_hole": false, 00:10:54.801 "seek_data": false, 00:10:54.801 "copy": false, 00:10:54.801 "nvme_iov_md": false 00:10:54.801 }, 00:10:54.801 "memory_domains": [ 
00:10:54.801 { 00:10:54.801 "dma_device_id": "system", 00:10:54.801 "dma_device_type": 1 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.801 "dma_device_type": 2 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "system", 00:10:54.801 "dma_device_type": 1 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.801 "dma_device_type": 2 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "system", 00:10:54.801 "dma_device_type": 1 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.801 "dma_device_type": 2 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "system", 00:10:54.801 "dma_device_type": 1 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.801 "dma_device_type": 2 00:10:54.801 } 00:10:54.801 ], 00:10:54.801 "driver_specific": { 00:10:54.801 "raid": { 00:10:54.801 "uuid": "c9746e0a-8cc9-4908-aa9b-341a4e785a5c", 00:10:54.801 "strip_size_kb": 64, 00:10:54.801 "state": "online", 00:10:54.801 "raid_level": "raid0", 00:10:54.801 "superblock": false, 00:10:54.801 "num_base_bdevs": 4, 00:10:54.801 "num_base_bdevs_discovered": 4, 00:10:54.801 "num_base_bdevs_operational": 4, 00:10:54.801 "base_bdevs_list": [ 00:10:54.801 { 00:10:54.801 "name": "NewBaseBdev", 00:10:54.801 "uuid": "df5312f5-797f-4d5a-b075-9c4c535c10a3", 00:10:54.801 "is_configured": true, 00:10:54.801 "data_offset": 0, 00:10:54.801 "data_size": 65536 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "name": "BaseBdev2", 00:10:54.801 "uuid": "488bbddd-6483-44c0-8313-d7b4cca5507c", 00:10:54.801 "is_configured": true, 00:10:54.801 "data_offset": 0, 00:10:54.801 "data_size": 65536 00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "name": "BaseBdev3", 00:10:54.801 "uuid": "4d25da87-4883-499c-8cb6-abaa3347c9cf", 00:10:54.801 "is_configured": true, 00:10:54.801 "data_offset": 0, 00:10:54.801 "data_size": 65536 
00:10:54.801 }, 00:10:54.801 { 00:10:54.801 "name": "BaseBdev4", 00:10:54.801 "uuid": "1015492a-057a-4782-8e50-66d10bcfc6c1", 00:10:54.801 "is_configured": true, 00:10:54.801 "data_offset": 0, 00:10:54.801 "data_size": 65536 00:10:54.801 } 00:10:54.801 ] 00:10:54.801 } 00:10:54.801 } 00:10:54.801 }' 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.801 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:54.801 BaseBdev2 00:10:54.801 BaseBdev3 00:10:54.801 BaseBdev4' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.802 
07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.802 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.061 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 [2024-11-29 07:42:44.834208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.062 [2024-11-29 07:42:44.834279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.062 [2024-11-29 07:42:44.834393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.062 [2024-11-29 07:42:44.834504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.062 [2024-11-29 07:42:44.834553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69156 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69156 ']' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69156 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69156 00:10:55.062 killing process with pid 69156 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69156' 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69156 00:10:55.062 [2024-11-29 07:42:44.882382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.062 07:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69156 00:10:55.630 [2024-11-29 07:42:45.270920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:56.568 ************************************ 00:10:56.568 END TEST raid_state_function_test 00:10:56.568 ************************************ 00:10:56.568 00:10:56.568 real 0m11.451s 00:10:56.568 user 0m18.217s 00:10:56.568 sys 0m1.991s 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.568 07:42:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:56.568 07:42:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:56.568 07:42:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.568 07:42:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.568 ************************************ 00:10:56.568 START TEST raid_state_function_test_sb 00:10:56.568 ************************************ 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:56.568 
07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69828 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69828' 00:10:56.568 Process raid pid: 69828 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69828 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69828 ']' 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.568 07:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.828 [2024-11-29 07:42:46.534893] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:56.828 [2024-11-29 07:42:46.535106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.828 [2024-11-29 07:42:46.706471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.087 [2024-11-29 07:42:46.819938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.087 [2024-11-29 07:42:47.019768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.087 [2024-11-29 07:42:47.019907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.658 [2024-11-29 07:42:47.359790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.658 [2024-11-29 07:42:47.359894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.658 [2024-11-29 07:42:47.359926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.658 [2024-11-29 07:42:47.359950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.658 [2024-11-29 07:42:47.359970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:57.658 [2024-11-29 07:42:47.359992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.658 [2024-11-29 07:42:47.360010] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.658 [2024-11-29 07:42:47.360031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.658 07:42:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.658 "name": "Existed_Raid", 00:10:57.658 "uuid": "9637c38a-dfdf-4e56-9c73-1982ed001967", 00:10:57.658 "strip_size_kb": 64, 00:10:57.658 "state": "configuring", 00:10:57.658 "raid_level": "raid0", 00:10:57.658 "superblock": true, 00:10:57.658 "num_base_bdevs": 4, 00:10:57.658 "num_base_bdevs_discovered": 0, 00:10:57.658 "num_base_bdevs_operational": 4, 00:10:57.658 "base_bdevs_list": [ 00:10:57.658 { 00:10:57.658 "name": "BaseBdev1", 00:10:57.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.658 "is_configured": false, 00:10:57.658 "data_offset": 0, 00:10:57.658 "data_size": 0 00:10:57.658 }, 00:10:57.658 { 00:10:57.658 "name": "BaseBdev2", 00:10:57.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.658 "is_configured": false, 00:10:57.658 "data_offset": 0, 00:10:57.658 "data_size": 0 00:10:57.658 }, 00:10:57.658 { 00:10:57.658 "name": "BaseBdev3", 00:10:57.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.658 "is_configured": false, 00:10:57.658 "data_offset": 0, 00:10:57.658 "data_size": 0 00:10:57.658 }, 00:10:57.658 { 00:10:57.658 "name": "BaseBdev4", 00:10:57.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.658 "is_configured": false, 00:10:57.658 "data_offset": 0, 00:10:57.658 "data_size": 0 00:10:57.658 } 00:10:57.658 ] 00:10:57.658 }' 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.658 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.918 [2024-11-29 07:42:47.798954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.918 [2024-11-29 07:42:47.799070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.918 [2024-11-29 07:42:47.810942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.918 [2024-11-29 07:42:47.811020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.918 [2024-11-29 07:42:47.811050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.918 [2024-11-29 07:42:47.811088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.918 [2024-11-29 07:42:47.811154] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.918 [2024-11-29 07:42:47.811178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.918 [2024-11-29 07:42:47.811205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:57.918 [2024-11-29 07:42:47.811231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.918 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.918 [2024-11-29 07:42:47.858516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.178 BaseBdev1 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.178 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.179 [ 00:10:58.179 { 00:10:58.179 "name": "BaseBdev1", 00:10:58.179 "aliases": [ 00:10:58.179 "daab9d10-8241-4412-9157-66860f44591d" 00:10:58.179 ], 00:10:58.179 "product_name": "Malloc disk", 00:10:58.179 "block_size": 512, 00:10:58.179 "num_blocks": 65536, 00:10:58.179 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:10:58.179 "assigned_rate_limits": { 00:10:58.179 "rw_ios_per_sec": 0, 00:10:58.179 "rw_mbytes_per_sec": 0, 00:10:58.179 "r_mbytes_per_sec": 0, 00:10:58.179 "w_mbytes_per_sec": 0 00:10:58.179 }, 00:10:58.179 "claimed": true, 00:10:58.179 "claim_type": "exclusive_write", 00:10:58.179 "zoned": false, 00:10:58.179 "supported_io_types": { 00:10:58.179 "read": true, 00:10:58.179 "write": true, 00:10:58.179 "unmap": true, 00:10:58.179 "flush": true, 00:10:58.179 "reset": true, 00:10:58.179 "nvme_admin": false, 00:10:58.179 "nvme_io": false, 00:10:58.179 "nvme_io_md": false, 00:10:58.179 "write_zeroes": true, 00:10:58.179 "zcopy": true, 00:10:58.179 "get_zone_info": false, 00:10:58.179 "zone_management": false, 00:10:58.179 "zone_append": false, 00:10:58.179 "compare": false, 00:10:58.179 "compare_and_write": false, 00:10:58.179 "abort": true, 00:10:58.179 "seek_hole": false, 00:10:58.179 "seek_data": false, 00:10:58.179 "copy": true, 00:10:58.179 "nvme_iov_md": false 00:10:58.179 }, 00:10:58.179 "memory_domains": [ 00:10:58.179 { 00:10:58.179 "dma_device_id": "system", 00:10:58.179 "dma_device_type": 1 00:10:58.179 }, 00:10:58.179 { 00:10:58.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.179 "dma_device_type": 2 00:10:58.179 } 00:10:58.179 ], 00:10:58.179 "driver_specific": {} 
00:10:58.179 } 00:10:58.179 ] 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.179 "name": "Existed_Raid", 00:10:58.179 "uuid": "2e82145f-bb52-46b8-81ab-42f789301db4", 00:10:58.179 "strip_size_kb": 64, 00:10:58.179 "state": "configuring", 00:10:58.179 "raid_level": "raid0", 00:10:58.179 "superblock": true, 00:10:58.179 "num_base_bdevs": 4, 00:10:58.179 "num_base_bdevs_discovered": 1, 00:10:58.179 "num_base_bdevs_operational": 4, 00:10:58.179 "base_bdevs_list": [ 00:10:58.179 { 00:10:58.179 "name": "BaseBdev1", 00:10:58.179 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:10:58.179 "is_configured": true, 00:10:58.179 "data_offset": 2048, 00:10:58.179 "data_size": 63488 00:10:58.179 }, 00:10:58.179 { 00:10:58.179 "name": "BaseBdev2", 00:10:58.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.179 "is_configured": false, 00:10:58.179 "data_offset": 0, 00:10:58.179 "data_size": 0 00:10:58.179 }, 00:10:58.179 { 00:10:58.179 "name": "BaseBdev3", 00:10:58.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.179 "is_configured": false, 00:10:58.179 "data_offset": 0, 00:10:58.179 "data_size": 0 00:10:58.179 }, 00:10:58.179 { 00:10:58.179 "name": "BaseBdev4", 00:10:58.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.179 "is_configured": false, 00:10:58.179 "data_offset": 0, 00:10:58.179 "data_size": 0 00:10:58.179 } 00:10:58.179 ] 00:10:58.179 }' 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.179 07:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.438 [2024-11-29 07:42:48.373678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.438 [2024-11-29 07:42:48.373732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.438 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.438 [2024-11-29 07:42:48.381738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.698 [2024-11-29 07:42:48.383818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.698 [2024-11-29 07:42:48.383864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.698 [2024-11-29 07:42:48.383876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.698 [2024-11-29 07:42:48.383888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.698 [2024-11-29 07:42:48.383896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:58.698 [2024-11-29 07:42:48.383905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:58.698 07:42:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.698 "name": 
"Existed_Raid", 00:10:58.698 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:10:58.698 "strip_size_kb": 64, 00:10:58.698 "state": "configuring", 00:10:58.698 "raid_level": "raid0", 00:10:58.698 "superblock": true, 00:10:58.698 "num_base_bdevs": 4, 00:10:58.698 "num_base_bdevs_discovered": 1, 00:10:58.698 "num_base_bdevs_operational": 4, 00:10:58.698 "base_bdevs_list": [ 00:10:58.698 { 00:10:58.698 "name": "BaseBdev1", 00:10:58.698 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:10:58.698 "is_configured": true, 00:10:58.698 "data_offset": 2048, 00:10:58.698 "data_size": 63488 00:10:58.698 }, 00:10:58.698 { 00:10:58.698 "name": "BaseBdev2", 00:10:58.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.698 "is_configured": false, 00:10:58.698 "data_offset": 0, 00:10:58.698 "data_size": 0 00:10:58.698 }, 00:10:58.698 { 00:10:58.698 "name": "BaseBdev3", 00:10:58.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.698 "is_configured": false, 00:10:58.698 "data_offset": 0, 00:10:58.698 "data_size": 0 00:10:58.698 }, 00:10:58.698 { 00:10:58.698 "name": "BaseBdev4", 00:10:58.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.698 "is_configured": false, 00:10:58.698 "data_offset": 0, 00:10:58.698 "data_size": 0 00:10:58.698 } 00:10:58.698 ] 00:10:58.698 }' 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.698 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.958 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.958 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.958 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 [2024-11-29 07:42:48.917768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:59.218 BaseBdev2 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 [ 00:10:59.218 { 00:10:59.218 "name": "BaseBdev2", 00:10:59.218 "aliases": [ 00:10:59.218 "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83" 00:10:59.218 ], 00:10:59.218 "product_name": "Malloc disk", 00:10:59.218 "block_size": 512, 00:10:59.218 "num_blocks": 65536, 00:10:59.218 "uuid": "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:10:59.218 
"assigned_rate_limits": { 00:10:59.218 "rw_ios_per_sec": 0, 00:10:59.218 "rw_mbytes_per_sec": 0, 00:10:59.218 "r_mbytes_per_sec": 0, 00:10:59.218 "w_mbytes_per_sec": 0 00:10:59.218 }, 00:10:59.218 "claimed": true, 00:10:59.218 "claim_type": "exclusive_write", 00:10:59.218 "zoned": false, 00:10:59.218 "supported_io_types": { 00:10:59.218 "read": true, 00:10:59.218 "write": true, 00:10:59.218 "unmap": true, 00:10:59.218 "flush": true, 00:10:59.218 "reset": true, 00:10:59.218 "nvme_admin": false, 00:10:59.218 "nvme_io": false, 00:10:59.218 "nvme_io_md": false, 00:10:59.218 "write_zeroes": true, 00:10:59.218 "zcopy": true, 00:10:59.218 "get_zone_info": false, 00:10:59.218 "zone_management": false, 00:10:59.218 "zone_append": false, 00:10:59.218 "compare": false, 00:10:59.218 "compare_and_write": false, 00:10:59.218 "abort": true, 00:10:59.218 "seek_hole": false, 00:10:59.218 "seek_data": false, 00:10:59.218 "copy": true, 00:10:59.218 "nvme_iov_md": false 00:10:59.218 }, 00:10:59.218 "memory_domains": [ 00:10:59.218 { 00:10:59.218 "dma_device_id": "system", 00:10:59.218 "dma_device_type": 1 00:10:59.218 }, 00:10:59.218 { 00:10:59.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.218 "dma_device_type": 2 00:10:59.218 } 00:10:59.218 ], 00:10:59.218 "driver_specific": {} 00:10:59.218 } 00:10:59.218 ] 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 07:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.218 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.218 "name": "Existed_Raid", 00:10:59.218 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:10:59.218 "strip_size_kb": 64, 00:10:59.218 "state": "configuring", 00:10:59.218 "raid_level": "raid0", 00:10:59.218 "superblock": true, 00:10:59.218 "num_base_bdevs": 4, 00:10:59.218 "num_base_bdevs_discovered": 2, 00:10:59.218 "num_base_bdevs_operational": 4, 
00:10:59.218 "base_bdevs_list": [ 00:10:59.218 { 00:10:59.218 "name": "BaseBdev1", 00:10:59.218 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:10:59.218 "is_configured": true, 00:10:59.218 "data_offset": 2048, 00:10:59.218 "data_size": 63488 00:10:59.218 }, 00:10:59.218 { 00:10:59.218 "name": "BaseBdev2", 00:10:59.218 "uuid": "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:10:59.218 "is_configured": true, 00:10:59.218 "data_offset": 2048, 00:10:59.218 "data_size": 63488 00:10:59.218 }, 00:10:59.218 { 00:10:59.218 "name": "BaseBdev3", 00:10:59.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.218 "is_configured": false, 00:10:59.218 "data_offset": 0, 00:10:59.218 "data_size": 0 00:10:59.218 }, 00:10:59.218 { 00:10:59.218 "name": "BaseBdev4", 00:10:59.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.218 "is_configured": false, 00:10:59.218 "data_offset": 0, 00:10:59.218 "data_size": 0 00:10:59.218 } 00:10:59.218 ] 00:10:59.218 }' 00:10:59.218 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.218 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.478 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.478 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.479 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.738 [2024-11-29 07:42:49.439845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.738 BaseBdev3 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.738 [ 00:10:59.738 { 00:10:59.738 "name": "BaseBdev3", 00:10:59.738 "aliases": [ 00:10:59.738 "250e98b7-c3dc-4e54-8658-80f07fa3f904" 00:10:59.738 ], 00:10:59.738 "product_name": "Malloc disk", 00:10:59.738 "block_size": 512, 00:10:59.738 "num_blocks": 65536, 00:10:59.738 "uuid": "250e98b7-c3dc-4e54-8658-80f07fa3f904", 00:10:59.738 "assigned_rate_limits": { 00:10:59.738 "rw_ios_per_sec": 0, 00:10:59.738 "rw_mbytes_per_sec": 0, 00:10:59.738 "r_mbytes_per_sec": 0, 00:10:59.738 "w_mbytes_per_sec": 0 00:10:59.738 }, 00:10:59.738 "claimed": true, 00:10:59.738 "claim_type": "exclusive_write", 00:10:59.738 "zoned": false, 00:10:59.738 "supported_io_types": { 00:10:59.738 "read": true, 00:10:59.738 
"write": true, 00:10:59.738 "unmap": true, 00:10:59.738 "flush": true, 00:10:59.738 "reset": true, 00:10:59.738 "nvme_admin": false, 00:10:59.738 "nvme_io": false, 00:10:59.738 "nvme_io_md": false, 00:10:59.738 "write_zeroes": true, 00:10:59.738 "zcopy": true, 00:10:59.738 "get_zone_info": false, 00:10:59.738 "zone_management": false, 00:10:59.738 "zone_append": false, 00:10:59.738 "compare": false, 00:10:59.738 "compare_and_write": false, 00:10:59.738 "abort": true, 00:10:59.738 "seek_hole": false, 00:10:59.738 "seek_data": false, 00:10:59.738 "copy": true, 00:10:59.738 "nvme_iov_md": false 00:10:59.738 }, 00:10:59.738 "memory_domains": [ 00:10:59.738 { 00:10:59.738 "dma_device_id": "system", 00:10:59.738 "dma_device_type": 1 00:10:59.738 }, 00:10:59.738 { 00:10:59.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.738 "dma_device_type": 2 00:10:59.738 } 00:10:59.738 ], 00:10:59.738 "driver_specific": {} 00:10:59.738 } 00:10:59.738 ] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.738 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.738 "name": "Existed_Raid", 00:10:59.738 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:10:59.738 "strip_size_kb": 64, 00:10:59.738 "state": "configuring", 00:10:59.738 "raid_level": "raid0", 00:10:59.738 "superblock": true, 00:10:59.738 "num_base_bdevs": 4, 00:10:59.738 "num_base_bdevs_discovered": 3, 00:10:59.738 "num_base_bdevs_operational": 4, 00:10:59.738 "base_bdevs_list": [ 00:10:59.738 { 00:10:59.738 "name": "BaseBdev1", 00:10:59.738 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:10:59.738 "is_configured": true, 00:10:59.738 "data_offset": 2048, 00:10:59.738 "data_size": 63488 00:10:59.738 }, 00:10:59.738 { 00:10:59.738 "name": "BaseBdev2", 00:10:59.739 "uuid": 
"2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:10:59.739 "is_configured": true, 00:10:59.739 "data_offset": 2048, 00:10:59.739 "data_size": 63488 00:10:59.739 }, 00:10:59.739 { 00:10:59.739 "name": "BaseBdev3", 00:10:59.739 "uuid": "250e98b7-c3dc-4e54-8658-80f07fa3f904", 00:10:59.739 "is_configured": true, 00:10:59.739 "data_offset": 2048, 00:10:59.739 "data_size": 63488 00:10:59.739 }, 00:10:59.739 { 00:10:59.739 "name": "BaseBdev4", 00:10:59.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.739 "is_configured": false, 00:10:59.739 "data_offset": 0, 00:10:59.739 "data_size": 0 00:10:59.739 } 00:10:59.739 ] 00:10:59.739 }' 00:10:59.739 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.739 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 [2024-11-29 07:42:49.923199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.998 [2024-11-29 07:42:49.923540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.998 [2024-11-29 07:42:49.923598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.998 [2024-11-29 07:42:49.923902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.998 [2024-11-29 07:42:49.924089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.998 [2024-11-29 07:42:49.924143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:59.998 BaseBdev4 00:10:59.998 [2024-11-29 07:42:49.924319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.257 [ 00:11:00.257 { 00:11:00.257 "name": "BaseBdev4", 00:11:00.257 "aliases": [ 00:11:00.257 "46f2bf66-3755-4f84-9b9d-2017bb2c1218" 00:11:00.257 ], 00:11:00.257 "product_name": "Malloc disk", 00:11:00.257 "block_size": 512, 00:11:00.257 
"num_blocks": 65536, 00:11:00.257 "uuid": "46f2bf66-3755-4f84-9b9d-2017bb2c1218", 00:11:00.257 "assigned_rate_limits": { 00:11:00.257 "rw_ios_per_sec": 0, 00:11:00.257 "rw_mbytes_per_sec": 0, 00:11:00.257 "r_mbytes_per_sec": 0, 00:11:00.257 "w_mbytes_per_sec": 0 00:11:00.257 }, 00:11:00.257 "claimed": true, 00:11:00.257 "claim_type": "exclusive_write", 00:11:00.257 "zoned": false, 00:11:00.257 "supported_io_types": { 00:11:00.257 "read": true, 00:11:00.257 "write": true, 00:11:00.257 "unmap": true, 00:11:00.257 "flush": true, 00:11:00.257 "reset": true, 00:11:00.257 "nvme_admin": false, 00:11:00.257 "nvme_io": false, 00:11:00.257 "nvme_io_md": false, 00:11:00.257 "write_zeroes": true, 00:11:00.257 "zcopy": true, 00:11:00.257 "get_zone_info": false, 00:11:00.257 "zone_management": false, 00:11:00.257 "zone_append": false, 00:11:00.257 "compare": false, 00:11:00.257 "compare_and_write": false, 00:11:00.257 "abort": true, 00:11:00.257 "seek_hole": false, 00:11:00.257 "seek_data": false, 00:11:00.257 "copy": true, 00:11:00.257 "nvme_iov_md": false 00:11:00.257 }, 00:11:00.257 "memory_domains": [ 00:11:00.257 { 00:11:00.257 "dma_device_id": "system", 00:11:00.257 "dma_device_type": 1 00:11:00.257 }, 00:11:00.257 { 00:11:00.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.257 "dma_device_type": 2 00:11:00.257 } 00:11:00.257 ], 00:11:00.257 "driver_specific": {} 00:11:00.257 } 00:11:00.257 ] 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.257 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.258 07:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.258 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.258 "name": "Existed_Raid", 00:11:00.258 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:11:00.258 "strip_size_kb": 64, 00:11:00.258 "state": "online", 00:11:00.258 "raid_level": "raid0", 00:11:00.258 "superblock": true, 00:11:00.258 "num_base_bdevs": 4, 
00:11:00.258 "num_base_bdevs_discovered": 4, 00:11:00.258 "num_base_bdevs_operational": 4, 00:11:00.258 "base_bdevs_list": [ 00:11:00.258 { 00:11:00.258 "name": "BaseBdev1", 00:11:00.258 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:11:00.258 "is_configured": true, 00:11:00.258 "data_offset": 2048, 00:11:00.258 "data_size": 63488 00:11:00.258 }, 00:11:00.258 { 00:11:00.258 "name": "BaseBdev2", 00:11:00.258 "uuid": "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:11:00.258 "is_configured": true, 00:11:00.258 "data_offset": 2048, 00:11:00.258 "data_size": 63488 00:11:00.258 }, 00:11:00.258 { 00:11:00.258 "name": "BaseBdev3", 00:11:00.258 "uuid": "250e98b7-c3dc-4e54-8658-80f07fa3f904", 00:11:00.258 "is_configured": true, 00:11:00.258 "data_offset": 2048, 00:11:00.258 "data_size": 63488 00:11:00.258 }, 00:11:00.258 { 00:11:00.258 "name": "BaseBdev4", 00:11:00.258 "uuid": "46f2bf66-3755-4f84-9b9d-2017bb2c1218", 00:11:00.258 "is_configured": true, 00:11:00.258 "data_offset": 2048, 00:11:00.258 "data_size": 63488 00:11:00.258 } 00:11:00.258 ] 00:11:00.258 }' 00:11:00.258 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.258 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.517 
07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.517 [2024-11-29 07:42:50.390731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.517 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.517 "name": "Existed_Raid", 00:11:00.517 "aliases": [ 00:11:00.517 "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd" 00:11:00.517 ], 00:11:00.517 "product_name": "Raid Volume", 00:11:00.517 "block_size": 512, 00:11:00.517 "num_blocks": 253952, 00:11:00.517 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:11:00.517 "assigned_rate_limits": { 00:11:00.517 "rw_ios_per_sec": 0, 00:11:00.517 "rw_mbytes_per_sec": 0, 00:11:00.517 "r_mbytes_per_sec": 0, 00:11:00.517 "w_mbytes_per_sec": 0 00:11:00.517 }, 00:11:00.517 "claimed": false, 00:11:00.517 "zoned": false, 00:11:00.517 "supported_io_types": { 00:11:00.517 "read": true, 00:11:00.517 "write": true, 00:11:00.517 "unmap": true, 00:11:00.517 "flush": true, 00:11:00.517 "reset": true, 00:11:00.517 "nvme_admin": false, 00:11:00.517 "nvme_io": false, 00:11:00.517 "nvme_io_md": false, 00:11:00.517 "write_zeroes": true, 00:11:00.517 "zcopy": false, 00:11:00.517 "get_zone_info": false, 00:11:00.517 "zone_management": false, 00:11:00.517 "zone_append": false, 00:11:00.517 "compare": false, 00:11:00.517 "compare_and_write": false, 00:11:00.517 "abort": false, 00:11:00.517 "seek_hole": false, 00:11:00.517 "seek_data": false, 00:11:00.517 "copy": false, 00:11:00.517 
"nvme_iov_md": false 00:11:00.517 }, 00:11:00.517 "memory_domains": [ 00:11:00.517 { 00:11:00.517 "dma_device_id": "system", 00:11:00.517 "dma_device_type": 1 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.517 "dma_device_type": 2 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "system", 00:11:00.517 "dma_device_type": 1 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.517 "dma_device_type": 2 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "system", 00:11:00.517 "dma_device_type": 1 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.517 "dma_device_type": 2 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "system", 00:11:00.517 "dma_device_type": 1 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.517 "dma_device_type": 2 00:11:00.517 } 00:11:00.517 ], 00:11:00.517 "driver_specific": { 00:11:00.517 "raid": { 00:11:00.517 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:11:00.517 "strip_size_kb": 64, 00:11:00.517 "state": "online", 00:11:00.517 "raid_level": "raid0", 00:11:00.517 "superblock": true, 00:11:00.517 "num_base_bdevs": 4, 00:11:00.517 "num_base_bdevs_discovered": 4, 00:11:00.517 "num_base_bdevs_operational": 4, 00:11:00.517 "base_bdevs_list": [ 00:11:00.517 { 00:11:00.517 "name": "BaseBdev1", 00:11:00.517 "uuid": "daab9d10-8241-4412-9157-66860f44591d", 00:11:00.517 "is_configured": true, 00:11:00.517 "data_offset": 2048, 00:11:00.517 "data_size": 63488 00:11:00.517 }, 00:11:00.517 { 00:11:00.517 "name": "BaseBdev2", 00:11:00.517 "uuid": "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:11:00.517 "is_configured": true, 00:11:00.518 "data_offset": 2048, 00:11:00.518 "data_size": 63488 00:11:00.518 }, 00:11:00.518 { 00:11:00.518 "name": "BaseBdev3", 00:11:00.518 "uuid": "250e98b7-c3dc-4e54-8658-80f07fa3f904", 00:11:00.518 "is_configured": true, 
00:11:00.518 "data_offset": 2048, 00:11:00.518 "data_size": 63488 00:11:00.518 }, 00:11:00.518 { 00:11:00.518 "name": "BaseBdev4", 00:11:00.518 "uuid": "46f2bf66-3755-4f84-9b9d-2017bb2c1218", 00:11:00.518 "is_configured": true, 00:11:00.518 "data_offset": 2048, 00:11:00.518 "data_size": 63488 00:11:00.518 } 00:11:00.518 ] 00:11:00.518 } 00:11:00.518 } 00:11:00.518 }' 00:11:00.518 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:00.776 BaseBdev2 00:11:00.776 BaseBdev3 00:11:00.776 BaseBdev4' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.776 07:42:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:00.776 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.777 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.777 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.036 [2024-11-29 07:42:50.733877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.036 [2024-11-29 07:42:50.733908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.036 [2024-11-29 07:42:50.733958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.036 "name": "Existed_Raid", 00:11:01.036 "uuid": "176f1ec3-c413-4e81-ae5d-d51ad1db0dcd", 00:11:01.036 "strip_size_kb": 64, 00:11:01.036 "state": "offline", 00:11:01.036 "raid_level": "raid0", 00:11:01.036 "superblock": true, 00:11:01.036 "num_base_bdevs": 4, 00:11:01.036 "num_base_bdevs_discovered": 3, 00:11:01.036 "num_base_bdevs_operational": 3, 00:11:01.036 "base_bdevs_list": [ 00:11:01.036 { 00:11:01.036 "name": null, 00:11:01.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.036 "is_configured": false, 00:11:01.036 "data_offset": 0, 00:11:01.036 "data_size": 63488 00:11:01.036 }, 00:11:01.036 { 00:11:01.036 "name": "BaseBdev2", 00:11:01.036 "uuid": "2fc587de-c0ae-4ea6-9c7a-bfd70e65ee83", 00:11:01.036 "is_configured": true, 00:11:01.036 "data_offset": 2048, 00:11:01.036 "data_size": 63488 00:11:01.036 }, 00:11:01.036 { 00:11:01.036 "name": "BaseBdev3", 00:11:01.036 "uuid": "250e98b7-c3dc-4e54-8658-80f07fa3f904", 00:11:01.036 "is_configured": true, 00:11:01.036 "data_offset": 2048, 00:11:01.036 "data_size": 63488 00:11:01.036 }, 00:11:01.036 { 00:11:01.036 "name": "BaseBdev4", 00:11:01.036 "uuid": "46f2bf66-3755-4f84-9b9d-2017bb2c1218", 00:11:01.036 "is_configured": true, 00:11:01.036 "data_offset": 2048, 00:11:01.036 "data_size": 63488 00:11:01.036 } 00:11:01.036 ] 00:11:01.036 }' 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.036 07:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.603 07:42:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 [2024-11-29 07:42:51.307950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 [2024-11-29 07:42:51.460980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:01.864 07:42:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.864 [2024-11-29 07:42:51.614620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:01.864 [2024-11-29 07:42:51.614670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.864 BaseBdev2 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.864 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.124 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.124 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.124 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.124 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.124 [ 00:11:02.124 { 00:11:02.124 "name": "BaseBdev2", 00:11:02.124 "aliases": [ 00:11:02.124 
"ee8e2d4e-541e-420b-b65b-56b5731fb795" 00:11:02.124 ], 00:11:02.124 "product_name": "Malloc disk", 00:11:02.124 "block_size": 512, 00:11:02.124 "num_blocks": 65536, 00:11:02.124 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:02.124 "assigned_rate_limits": { 00:11:02.124 "rw_ios_per_sec": 0, 00:11:02.124 "rw_mbytes_per_sec": 0, 00:11:02.124 "r_mbytes_per_sec": 0, 00:11:02.124 "w_mbytes_per_sec": 0 00:11:02.124 }, 00:11:02.124 "claimed": false, 00:11:02.124 "zoned": false, 00:11:02.124 "supported_io_types": { 00:11:02.124 "read": true, 00:11:02.124 "write": true, 00:11:02.124 "unmap": true, 00:11:02.124 "flush": true, 00:11:02.124 "reset": true, 00:11:02.125 "nvme_admin": false, 00:11:02.125 "nvme_io": false, 00:11:02.125 "nvme_io_md": false, 00:11:02.125 "write_zeroes": true, 00:11:02.125 "zcopy": true, 00:11:02.125 "get_zone_info": false, 00:11:02.125 "zone_management": false, 00:11:02.125 "zone_append": false, 00:11:02.125 "compare": false, 00:11:02.125 "compare_and_write": false, 00:11:02.125 "abort": true, 00:11:02.125 "seek_hole": false, 00:11:02.125 "seek_data": false, 00:11:02.125 "copy": true, 00:11:02.125 "nvme_iov_md": false 00:11:02.125 }, 00:11:02.125 "memory_domains": [ 00:11:02.125 { 00:11:02.125 "dma_device_id": "system", 00:11:02.125 "dma_device_type": 1 00:11:02.125 }, 00:11:02.125 { 00:11:02.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.125 "dma_device_type": 2 00:11:02.125 } 00:11:02.125 ], 00:11:02.125 "driver_specific": {} 00:11:02.125 } 00:11:02.125 ] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.125 07:42:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.125 BaseBdev3 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.125 [ 00:11:02.125 { 
00:11:02.125 "name": "BaseBdev3", 00:11:02.125 "aliases": [ 00:11:02.125 "e7e265aa-cd83-4cc1-960e-0d3d86e76910" 00:11:02.125 ], 00:11:02.125 "product_name": "Malloc disk", 00:11:02.125 "block_size": 512, 00:11:02.125 "num_blocks": 65536, 00:11:02.125 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:02.125 "assigned_rate_limits": { 00:11:02.125 "rw_ios_per_sec": 0, 00:11:02.125 "rw_mbytes_per_sec": 0, 00:11:02.125 "r_mbytes_per_sec": 0, 00:11:02.125 "w_mbytes_per_sec": 0 00:11:02.125 }, 00:11:02.125 "claimed": false, 00:11:02.125 "zoned": false, 00:11:02.125 "supported_io_types": { 00:11:02.125 "read": true, 00:11:02.125 "write": true, 00:11:02.125 "unmap": true, 00:11:02.125 "flush": true, 00:11:02.125 "reset": true, 00:11:02.125 "nvme_admin": false, 00:11:02.125 "nvme_io": false, 00:11:02.125 "nvme_io_md": false, 00:11:02.125 "write_zeroes": true, 00:11:02.125 "zcopy": true, 00:11:02.125 "get_zone_info": false, 00:11:02.125 "zone_management": false, 00:11:02.125 "zone_append": false, 00:11:02.125 "compare": false, 00:11:02.125 "compare_and_write": false, 00:11:02.125 "abort": true, 00:11:02.125 "seek_hole": false, 00:11:02.125 "seek_data": false, 00:11:02.125 "copy": true, 00:11:02.125 "nvme_iov_md": false 00:11:02.125 }, 00:11:02.125 "memory_domains": [ 00:11:02.125 { 00:11:02.125 "dma_device_id": "system", 00:11:02.125 "dma_device_type": 1 00:11:02.125 }, 00:11:02.125 { 00:11:02.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.125 "dma_device_type": 2 00:11:02.125 } 00:11:02.125 ], 00:11:02.125 "driver_specific": {} 00:11:02.125 } 00:11:02.125 ] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.125 BaseBdev4 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.125 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:02.125 [ 00:11:02.125 { 00:11:02.125 "name": "BaseBdev4", 00:11:02.125 "aliases": [ 00:11:02.125 "98030df0-1a52-4d77-a74c-e88a84bb0a46" 00:11:02.125 ], 00:11:02.125 "product_name": "Malloc disk", 00:11:02.125 "block_size": 512, 00:11:02.125 "num_blocks": 65536, 00:11:02.125 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:02.125 "assigned_rate_limits": { 00:11:02.125 "rw_ios_per_sec": 0, 00:11:02.125 "rw_mbytes_per_sec": 0, 00:11:02.125 "r_mbytes_per_sec": 0, 00:11:02.125 "w_mbytes_per_sec": 0 00:11:02.125 }, 00:11:02.125 "claimed": false, 00:11:02.125 "zoned": false, 00:11:02.125 "supported_io_types": { 00:11:02.125 "read": true, 00:11:02.125 "write": true, 00:11:02.125 "unmap": true, 00:11:02.125 "flush": true, 00:11:02.125 "reset": true, 00:11:02.125 "nvme_admin": false, 00:11:02.125 "nvme_io": false, 00:11:02.125 "nvme_io_md": false, 00:11:02.126 "write_zeroes": true, 00:11:02.126 "zcopy": true, 00:11:02.126 "get_zone_info": false, 00:11:02.126 "zone_management": false, 00:11:02.126 "zone_append": false, 00:11:02.126 "compare": false, 00:11:02.126 "compare_and_write": false, 00:11:02.126 "abort": true, 00:11:02.126 "seek_hole": false, 00:11:02.126 "seek_data": false, 00:11:02.126 "copy": true, 00:11:02.126 "nvme_iov_md": false 00:11:02.126 }, 00:11:02.126 "memory_domains": [ 00:11:02.126 { 00:11:02.126 "dma_device_id": "system", 00:11:02.126 "dma_device_type": 1 00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.126 "dma_device_type": 2 00:11:02.126 } 00:11:02.126 ], 00:11:02.126 "driver_specific": {} 00:11:02.126 } 00:11:02.126 ] 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.126 07:42:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 07:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 [2024-11-29 07:42:52.004444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.126 [2024-11-29 07:42:52.004545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.126 [2024-11-29 07:42:52.004594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.126 [2024-11-29 07:42:52.006550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.126 [2024-11-29 07:42:52.006604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.126 "name": "Existed_Raid", 00:11:02.126 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:02.126 "strip_size_kb": 64, 00:11:02.126 "state": "configuring", 00:11:02.126 "raid_level": "raid0", 00:11:02.126 "superblock": true, 00:11:02.126 "num_base_bdevs": 4, 00:11:02.126 "num_base_bdevs_discovered": 3, 00:11:02.126 "num_base_bdevs_operational": 4, 00:11:02.126 "base_bdevs_list": [ 00:11:02.126 { 00:11:02.126 "name": "BaseBdev1", 00:11:02.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.126 "is_configured": false, 00:11:02.126 "data_offset": 0, 00:11:02.126 "data_size": 0 00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "name": "BaseBdev2", 00:11:02.126 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 
00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "name": "BaseBdev3", 00:11:02.126 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "name": "BaseBdev4", 00:11:02.126 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 00:11:02.126 } 00:11:02.126 ] 00:11:02.126 }' 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.126 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.694 [2024-11-29 07:42:52.463704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.694 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.694 "name": "Existed_Raid", 00:11:02.694 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:02.694 "strip_size_kb": 64, 00:11:02.694 "state": "configuring", 00:11:02.694 "raid_level": "raid0", 00:11:02.694 "superblock": true, 00:11:02.694 "num_base_bdevs": 4, 00:11:02.694 "num_base_bdevs_discovered": 2, 00:11:02.694 "num_base_bdevs_operational": 4, 00:11:02.694 "base_bdevs_list": [ 00:11:02.694 { 00:11:02.694 "name": "BaseBdev1", 00:11:02.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.694 "is_configured": false, 00:11:02.694 "data_offset": 0, 00:11:02.694 "data_size": 0 00:11:02.694 }, 00:11:02.694 { 00:11:02.694 "name": null, 00:11:02.694 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:02.694 "is_configured": false, 00:11:02.694 "data_offset": 0, 00:11:02.694 "data_size": 63488 
00:11:02.694 }, 00:11:02.694 { 00:11:02.694 "name": "BaseBdev3", 00:11:02.694 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:02.694 "is_configured": true, 00:11:02.694 "data_offset": 2048, 00:11:02.694 "data_size": 63488 00:11:02.695 }, 00:11:02.695 { 00:11:02.695 "name": "BaseBdev4", 00:11:02.695 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:02.695 "is_configured": true, 00:11:02.695 "data_offset": 2048, 00:11:02.695 "data_size": 63488 00:11:02.695 } 00:11:02.695 ] 00:11:02.695 }' 00:11:02.695 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.695 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.953 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.953 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.953 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.953 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.953 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 [2024-11-29 07:42:52.951943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.212 BaseBdev1 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.212 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.212 [ 00:11:03.212 { 00:11:03.212 "name": "BaseBdev1", 00:11:03.212 "aliases": [ 00:11:03.212 "0722f4a9-4537-40c4-bf28-b685314e7c22" 00:11:03.212 ], 00:11:03.212 "product_name": "Malloc disk", 00:11:03.212 "block_size": 512, 00:11:03.212 "num_blocks": 65536, 00:11:03.212 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:03.212 "assigned_rate_limits": { 00:11:03.212 "rw_ios_per_sec": 0, 00:11:03.212 "rw_mbytes_per_sec": 0, 
00:11:03.212 "r_mbytes_per_sec": 0, 00:11:03.212 "w_mbytes_per_sec": 0 00:11:03.212 }, 00:11:03.212 "claimed": true, 00:11:03.212 "claim_type": "exclusive_write", 00:11:03.212 "zoned": false, 00:11:03.212 "supported_io_types": { 00:11:03.212 "read": true, 00:11:03.212 "write": true, 00:11:03.212 "unmap": true, 00:11:03.212 "flush": true, 00:11:03.212 "reset": true, 00:11:03.212 "nvme_admin": false, 00:11:03.212 "nvme_io": false, 00:11:03.212 "nvme_io_md": false, 00:11:03.212 "write_zeroes": true, 00:11:03.213 "zcopy": true, 00:11:03.213 "get_zone_info": false, 00:11:03.213 "zone_management": false, 00:11:03.213 "zone_append": false, 00:11:03.213 "compare": false, 00:11:03.213 "compare_and_write": false, 00:11:03.213 "abort": true, 00:11:03.213 "seek_hole": false, 00:11:03.213 "seek_data": false, 00:11:03.213 "copy": true, 00:11:03.213 "nvme_iov_md": false 00:11:03.213 }, 00:11:03.213 "memory_domains": [ 00:11:03.213 { 00:11:03.213 "dma_device_id": "system", 00:11:03.213 "dma_device_type": 1 00:11:03.213 }, 00:11:03.213 { 00:11:03.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.213 "dma_device_type": 2 00:11:03.213 } 00:11:03.213 ], 00:11:03.213 "driver_specific": {} 00:11:03.213 } 00:11:03.213 ] 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.213 07:42:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.213 07:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.213 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.213 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.213 "name": "Existed_Raid", 00:11:03.213 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:03.213 "strip_size_kb": 64, 00:11:03.213 "state": "configuring", 00:11:03.213 "raid_level": "raid0", 00:11:03.213 "superblock": true, 00:11:03.213 "num_base_bdevs": 4, 00:11:03.213 "num_base_bdevs_discovered": 3, 00:11:03.213 "num_base_bdevs_operational": 4, 00:11:03.213 "base_bdevs_list": [ 00:11:03.213 { 00:11:03.213 "name": "BaseBdev1", 00:11:03.213 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:03.213 "is_configured": true, 00:11:03.213 "data_offset": 2048, 00:11:03.213 "data_size": 63488 00:11:03.213 }, 00:11:03.213 { 
00:11:03.213 "name": null, 00:11:03.213 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:03.213 "is_configured": false, 00:11:03.213 "data_offset": 0, 00:11:03.213 "data_size": 63488 00:11:03.213 }, 00:11:03.213 { 00:11:03.213 "name": "BaseBdev3", 00:11:03.213 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:03.213 "is_configured": true, 00:11:03.213 "data_offset": 2048, 00:11:03.213 "data_size": 63488 00:11:03.213 }, 00:11:03.213 { 00:11:03.213 "name": "BaseBdev4", 00:11:03.213 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:03.213 "is_configured": true, 00:11:03.213 "data_offset": 2048, 00:11:03.213 "data_size": 63488 00:11:03.213 } 00:11:03.213 ] 00:11:03.213 }' 00:11:03.213 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.213 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 [2024-11-29 07:42:53.515194] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.781 07:42:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.781 "name": "Existed_Raid", 00:11:03.781 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:03.781 "strip_size_kb": 64, 00:11:03.781 "state": "configuring", 00:11:03.781 "raid_level": "raid0", 00:11:03.781 "superblock": true, 00:11:03.781 "num_base_bdevs": 4, 00:11:03.781 "num_base_bdevs_discovered": 2, 00:11:03.781 "num_base_bdevs_operational": 4, 00:11:03.781 "base_bdevs_list": [ 00:11:03.781 { 00:11:03.781 "name": "BaseBdev1", 00:11:03.781 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:03.781 "is_configured": true, 00:11:03.781 "data_offset": 2048, 00:11:03.781 "data_size": 63488 00:11:03.781 }, 00:11:03.781 { 00:11:03.781 "name": null, 00:11:03.781 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:03.781 "is_configured": false, 00:11:03.781 "data_offset": 0, 00:11:03.781 "data_size": 63488 00:11:03.781 }, 00:11:03.781 { 00:11:03.781 "name": null, 00:11:03.781 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:03.782 "is_configured": false, 00:11:03.782 "data_offset": 0, 00:11:03.782 "data_size": 63488 00:11:03.782 }, 00:11:03.782 { 00:11:03.782 "name": "BaseBdev4", 00:11:03.782 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:03.782 "is_configured": true, 00:11:03.782 "data_offset": 2048, 00:11:03.782 "data_size": 63488 00:11:03.782 } 00:11:03.782 ] 00:11:03.782 }' 00:11:03.782 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.782 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.041 
07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 [2024-11-29 07:42:53.966358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.041 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.301 07:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.301 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.301 "name": "Existed_Raid", 00:11:04.301 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:04.301 "strip_size_kb": 64, 00:11:04.301 "state": "configuring", 00:11:04.301 "raid_level": "raid0", 00:11:04.301 "superblock": true, 00:11:04.301 "num_base_bdevs": 4, 00:11:04.301 "num_base_bdevs_discovered": 3, 00:11:04.301 "num_base_bdevs_operational": 4, 00:11:04.301 "base_bdevs_list": [ 00:11:04.301 { 00:11:04.301 "name": "BaseBdev1", 00:11:04.301 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:04.301 "is_configured": true, 00:11:04.301 "data_offset": 2048, 00:11:04.301 "data_size": 63488 00:11:04.301 }, 00:11:04.301 { 00:11:04.301 "name": null, 00:11:04.301 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:04.301 "is_configured": false, 00:11:04.301 "data_offset": 0, 00:11:04.301 "data_size": 63488 00:11:04.301 }, 00:11:04.301 { 00:11:04.301 "name": "BaseBdev3", 00:11:04.301 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:04.301 "is_configured": true, 00:11:04.301 "data_offset": 2048, 00:11:04.301 "data_size": 63488 00:11:04.301 }, 00:11:04.301 { 00:11:04.301 "name": "BaseBdev4", 00:11:04.301 "uuid": 
"98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:04.301 "is_configured": true, 00:11:04.301 "data_offset": 2048, 00:11:04.301 "data_size": 63488 00:11:04.301 } 00:11:04.301 ] 00:11:04.301 }' 00:11:04.301 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.301 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.563 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.563 [2024-11-29 07:42:54.461571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.822 "name": "Existed_Raid", 00:11:04.822 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:04.822 "strip_size_kb": 64, 00:11:04.822 "state": "configuring", 00:11:04.822 "raid_level": "raid0", 00:11:04.822 "superblock": true, 00:11:04.822 "num_base_bdevs": 4, 00:11:04.822 "num_base_bdevs_discovered": 2, 00:11:04.822 "num_base_bdevs_operational": 4, 00:11:04.822 "base_bdevs_list": [ 00:11:04.822 { 00:11:04.822 "name": null, 00:11:04.822 
"uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:04.822 "is_configured": false, 00:11:04.822 "data_offset": 0, 00:11:04.822 "data_size": 63488 00:11:04.822 }, 00:11:04.822 { 00:11:04.822 "name": null, 00:11:04.822 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:04.822 "is_configured": false, 00:11:04.822 "data_offset": 0, 00:11:04.822 "data_size": 63488 00:11:04.822 }, 00:11:04.822 { 00:11:04.822 "name": "BaseBdev3", 00:11:04.822 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:04.822 "is_configured": true, 00:11:04.822 "data_offset": 2048, 00:11:04.822 "data_size": 63488 00:11:04.822 }, 00:11:04.822 { 00:11:04.822 "name": "BaseBdev4", 00:11:04.822 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:04.822 "is_configured": true, 00:11:04.822 "data_offset": 2048, 00:11:04.822 "data_size": 63488 00:11:04.822 } 00:11:04.822 ] 00:11:04.822 }' 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.822 07:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 [2024-11-29 07:42:55.089414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.396 07:42:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.396 "name": "Existed_Raid", 00:11:05.396 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:05.396 "strip_size_kb": 64, 00:11:05.396 "state": "configuring", 00:11:05.396 "raid_level": "raid0", 00:11:05.396 "superblock": true, 00:11:05.396 "num_base_bdevs": 4, 00:11:05.396 "num_base_bdevs_discovered": 3, 00:11:05.396 "num_base_bdevs_operational": 4, 00:11:05.396 "base_bdevs_list": [ 00:11:05.396 { 00:11:05.396 "name": null, 00:11:05.396 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:05.396 "is_configured": false, 00:11:05.396 "data_offset": 0, 00:11:05.396 "data_size": 63488 00:11:05.396 }, 00:11:05.396 { 00:11:05.396 "name": "BaseBdev2", 00:11:05.396 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:05.396 "is_configured": true, 00:11:05.396 "data_offset": 2048, 00:11:05.396 "data_size": 63488 00:11:05.396 }, 00:11:05.396 { 00:11:05.396 "name": "BaseBdev3", 00:11:05.396 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:05.396 "is_configured": true, 00:11:05.396 "data_offset": 2048, 00:11:05.396 "data_size": 63488 00:11:05.396 }, 00:11:05.396 { 00:11:05.396 "name": "BaseBdev4", 00:11:05.396 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:05.396 "is_configured": true, 00:11:05.396 "data_offset": 2048, 00:11:05.396 "data_size": 63488 00:11:05.396 } 00:11:05.396 ] 00:11:05.396 }' 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.396 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.656 07:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.656 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0722f4a9-4537-40c4-bf28-b685314e7c22 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 [2024-11-29 07:42:55.680732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.916 [2024-11-29 07:42:55.680985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.916 [2024-11-29 07:42:55.680998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.916 NewBaseBdev 00:11:05.916 [2024-11-29 07:42:55.681294] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:05.916 [2024-11-29 07:42:55.681449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.916 [2024-11-29 07:42:55.681460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.916 [2024-11-29 07:42:55.681582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.916 07:42:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 [ 00:11:05.916 { 00:11:05.916 "name": "NewBaseBdev", 00:11:05.916 "aliases": [ 00:11:05.916 "0722f4a9-4537-40c4-bf28-b685314e7c22" 00:11:05.916 ], 00:11:05.916 "product_name": "Malloc disk", 00:11:05.916 "block_size": 512, 00:11:05.916 "num_blocks": 65536, 00:11:05.916 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:05.916 "assigned_rate_limits": { 00:11:05.916 "rw_ios_per_sec": 0, 00:11:05.916 "rw_mbytes_per_sec": 0, 00:11:05.916 "r_mbytes_per_sec": 0, 00:11:05.916 "w_mbytes_per_sec": 0 00:11:05.916 }, 00:11:05.916 "claimed": true, 00:11:05.916 "claim_type": "exclusive_write", 00:11:05.916 "zoned": false, 00:11:05.916 "supported_io_types": { 00:11:05.916 "read": true, 00:11:05.916 "write": true, 00:11:05.916 "unmap": true, 00:11:05.916 "flush": true, 00:11:05.916 "reset": true, 00:11:05.916 "nvme_admin": false, 00:11:05.916 "nvme_io": false, 00:11:05.916 "nvme_io_md": false, 00:11:05.916 "write_zeroes": true, 00:11:05.916 "zcopy": true, 00:11:05.916 "get_zone_info": false, 00:11:05.916 "zone_management": false, 00:11:05.916 "zone_append": false, 00:11:05.916 "compare": false, 00:11:05.916 "compare_and_write": false, 00:11:05.916 "abort": true, 00:11:05.916 "seek_hole": false, 00:11:05.916 "seek_data": false, 00:11:05.916 "copy": true, 00:11:05.916 "nvme_iov_md": false 00:11:05.916 }, 00:11:05.916 "memory_domains": [ 00:11:05.916 { 00:11:05.916 "dma_device_id": "system", 00:11:05.916 "dma_device_type": 1 00:11:05.916 }, 00:11:05.916 { 00:11:05.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.916 "dma_device_type": 2 00:11:05.916 } 00:11:05.916 ], 00:11:05.916 "driver_specific": {} 00:11:05.916 } 00:11:05.916 ] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.916 07:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.916 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.916 "name": "Existed_Raid", 00:11:05.916 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:05.916 "strip_size_kb": 64, 00:11:05.916 
"state": "online", 00:11:05.916 "raid_level": "raid0", 00:11:05.916 "superblock": true, 00:11:05.916 "num_base_bdevs": 4, 00:11:05.916 "num_base_bdevs_discovered": 4, 00:11:05.916 "num_base_bdevs_operational": 4, 00:11:05.916 "base_bdevs_list": [ 00:11:05.916 { 00:11:05.916 "name": "NewBaseBdev", 00:11:05.916 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:05.916 "is_configured": true, 00:11:05.916 "data_offset": 2048, 00:11:05.916 "data_size": 63488 00:11:05.916 }, 00:11:05.916 { 00:11:05.916 "name": "BaseBdev2", 00:11:05.916 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:05.916 "is_configured": true, 00:11:05.916 "data_offset": 2048, 00:11:05.916 "data_size": 63488 00:11:05.916 }, 00:11:05.916 { 00:11:05.916 "name": "BaseBdev3", 00:11:05.916 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:05.916 "is_configured": true, 00:11:05.916 "data_offset": 2048, 00:11:05.916 "data_size": 63488 00:11:05.916 }, 00:11:05.916 { 00:11:05.916 "name": "BaseBdev4", 00:11:05.917 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:05.917 "is_configured": true, 00:11:05.917 "data_offset": 2048, 00:11:05.917 "data_size": 63488 00:11:05.917 } 00:11:05.917 ] 00:11:05.917 }' 00:11:05.917 07:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.917 07:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.485 
07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 [2024-11-29 07:42:56.144358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.485 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.485 "name": "Existed_Raid", 00:11:06.485 "aliases": [ 00:11:06.485 "2e0b0ab3-80dc-47fa-8cbc-0bf048831135" 00:11:06.485 ], 00:11:06.485 "product_name": "Raid Volume", 00:11:06.485 "block_size": 512, 00:11:06.485 "num_blocks": 253952, 00:11:06.485 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:06.485 "assigned_rate_limits": { 00:11:06.485 "rw_ios_per_sec": 0, 00:11:06.485 "rw_mbytes_per_sec": 0, 00:11:06.485 "r_mbytes_per_sec": 0, 00:11:06.485 "w_mbytes_per_sec": 0 00:11:06.485 }, 00:11:06.485 "claimed": false, 00:11:06.485 "zoned": false, 00:11:06.485 "supported_io_types": { 00:11:06.485 "read": true, 00:11:06.485 "write": true, 00:11:06.485 "unmap": true, 00:11:06.485 "flush": true, 00:11:06.485 "reset": true, 00:11:06.485 "nvme_admin": false, 00:11:06.485 "nvme_io": false, 00:11:06.485 "nvme_io_md": false, 00:11:06.485 "write_zeroes": true, 00:11:06.485 "zcopy": false, 00:11:06.485 "get_zone_info": false, 00:11:06.485 "zone_management": false, 00:11:06.485 "zone_append": false, 00:11:06.485 "compare": false, 00:11:06.485 "compare_and_write": false, 00:11:06.485 "abort": 
false, 00:11:06.485 "seek_hole": false, 00:11:06.485 "seek_data": false, 00:11:06.485 "copy": false, 00:11:06.485 "nvme_iov_md": false 00:11:06.485 }, 00:11:06.485 "memory_domains": [ 00:11:06.485 { 00:11:06.485 "dma_device_id": "system", 00:11:06.485 "dma_device_type": 1 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.485 "dma_device_type": 2 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "system", 00:11:06.485 "dma_device_type": 1 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.485 "dma_device_type": 2 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "system", 00:11:06.485 "dma_device_type": 1 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.485 "dma_device_type": 2 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "system", 00:11:06.485 "dma_device_type": 1 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.485 "dma_device_type": 2 00:11:06.485 } 00:11:06.485 ], 00:11:06.485 "driver_specific": { 00:11:06.485 "raid": { 00:11:06.485 "uuid": "2e0b0ab3-80dc-47fa-8cbc-0bf048831135", 00:11:06.485 "strip_size_kb": 64, 00:11:06.485 "state": "online", 00:11:06.485 "raid_level": "raid0", 00:11:06.485 "superblock": true, 00:11:06.485 "num_base_bdevs": 4, 00:11:06.485 "num_base_bdevs_discovered": 4, 00:11:06.485 "num_base_bdevs_operational": 4, 00:11:06.485 "base_bdevs_list": [ 00:11:06.485 { 00:11:06.485 "name": "NewBaseBdev", 00:11:06.485 "uuid": "0722f4a9-4537-40c4-bf28-b685314e7c22", 00:11:06.485 "is_configured": true, 00:11:06.485 "data_offset": 2048, 00:11:06.485 "data_size": 63488 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "name": "BaseBdev2", 00:11:06.485 "uuid": "ee8e2d4e-541e-420b-b65b-56b5731fb795", 00:11:06.485 "is_configured": true, 00:11:06.485 "data_offset": 2048, 00:11:06.485 "data_size": 63488 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 
"name": "BaseBdev3", 00:11:06.485 "uuid": "e7e265aa-cd83-4cc1-960e-0d3d86e76910", 00:11:06.485 "is_configured": true, 00:11:06.485 "data_offset": 2048, 00:11:06.485 "data_size": 63488 00:11:06.485 }, 00:11:06.485 { 00:11:06.485 "name": "BaseBdev4", 00:11:06.486 "uuid": "98030df0-1a52-4d77-a74c-e88a84bb0a46", 00:11:06.486 "is_configured": true, 00:11:06.486 "data_offset": 2048, 00:11:06.486 "data_size": 63488 00:11:06.486 } 00:11:06.486 ] 00:11:06.486 } 00:11:06.486 } 00:11:06.486 }' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:06.486 BaseBdev2 00:11:06.486 BaseBdev3 00:11:06.486 BaseBdev4' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.486 07:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.486 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.745 [2024-11-29 07:42:56.491437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.745 [2024-11-29 07:42:56.491511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.745 [2024-11-29 07:42:56.491609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.745 [2024-11-29 07:42:56.491712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.745 [2024-11-29 07:42:56.491766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69828 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69828 ']' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69828 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69828 00:11:06.745 killing process with pid 69828 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69828' 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69828 00:11:06.745 [2024-11-29 07:42:56.538139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.745 07:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69828 00:11:07.005 [2024-11-29 07:42:56.929202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.384 07:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.384 00:11:08.384 real 0m11.603s 00:11:08.384 user 0m18.480s 00:11:08.384 sys 0m2.060s 00:11:08.384 07:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.384 
************************************ 00:11:08.384 END TEST raid_state_function_test_sb 00:11:08.384 ************************************ 00:11:08.384 07:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.384 07:42:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:08.384 07:42:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.384 07:42:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.384 07:42:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.384 ************************************ 00:11:08.384 START TEST raid_superblock_test 00:11:08.384 ************************************ 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70498 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70498 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70498 ']' 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.384 07:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.384 [2024-11-29 07:42:58.196309] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:08.384 [2024-11-29 07:42:58.196524] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70498 ] 00:11:08.643 [2024-11-29 07:42:58.351607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.643 [2024-11-29 07:42:58.465282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.902 [2024-11-29 07:42:58.667229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.902 [2024-11-29 07:42:58.667268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:09.166 
07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.166 malloc1 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.166 [2024-11-29 07:42:59.078121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.166 [2024-11-29 07:42:59.078235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.166 [2024-11-29 07:42:59.078275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:09.166 [2024-11-29 07:42:59.078303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.166 [2024-11-29 07:42:59.080371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.166 [2024-11-29 07:42:59.080445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.166 pt1 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.166 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:09.167 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.167 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 malloc2 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 [2024-11-29 07:42:59.137280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.436 [2024-11-29 07:42:59.137379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.436 [2024-11-29 07:42:59.137422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.436 [2024-11-29 07:42:59.137450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.436 [2024-11-29 07:42:59.139497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.436 [2024-11-29 07:42:59.139568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.436 
pt2 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 malloc3 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 [2024-11-29 07:42:59.206345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.436 [2024-11-29 07:42:59.206467] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.436 [2024-11-29 07:42:59.206506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:09.436 [2024-11-29 07:42:59.206534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.436 [2024-11-29 07:42:59.208625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.436 [2024-11-29 07:42:59.208697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.436 pt3 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 malloc4 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 [2024-11-29 07:42:59.264241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:09.436 [2024-11-29 07:42:59.264326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.436 [2024-11-29 07:42:59.264351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:09.436 [2024-11-29 07:42:59.264360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.436 [2024-11-29 07:42:59.266557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.436 [2024-11-29 07:42:59.266596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:09.436 pt4 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.436 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.436 [2024-11-29 07:42:59.276268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.436 [2024-11-29 
07:42:59.278095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.436 [2024-11-29 07:42:59.278214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.436 [2024-11-29 07:42:59.278268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:09.436 [2024-11-29 07:42:59.278453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:09.436 [2024-11-29 07:42:59.278470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.437 [2024-11-29 07:42:59.278740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:09.437 [2024-11-29 07:42:59.278919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:09.437 [2024-11-29 07:42:59.278932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:09.437 [2024-11-29 07:42:59.279126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.437 "name": "raid_bdev1", 00:11:09.437 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:09.437 "strip_size_kb": 64, 00:11:09.437 "state": "online", 00:11:09.437 "raid_level": "raid0", 00:11:09.437 "superblock": true, 00:11:09.437 "num_base_bdevs": 4, 00:11:09.437 "num_base_bdevs_discovered": 4, 00:11:09.437 "num_base_bdevs_operational": 4, 00:11:09.437 "base_bdevs_list": [ 00:11:09.437 { 00:11:09.437 "name": "pt1", 00:11:09.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.437 "is_configured": true, 00:11:09.437 "data_offset": 2048, 00:11:09.437 "data_size": 63488 00:11:09.437 }, 00:11:09.437 { 00:11:09.437 "name": "pt2", 00:11:09.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.437 "is_configured": true, 00:11:09.437 "data_offset": 2048, 00:11:09.437 "data_size": 63488 00:11:09.437 }, 00:11:09.437 { 00:11:09.437 "name": "pt3", 00:11:09.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.437 "is_configured": true, 00:11:09.437 "data_offset": 2048, 00:11:09.437 
"data_size": 63488 00:11:09.437 }, 00:11:09.437 { 00:11:09.437 "name": "pt4", 00:11:09.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.437 "is_configured": true, 00:11:09.437 "data_offset": 2048, 00:11:09.437 "data_size": 63488 00:11:09.437 } 00:11:09.437 ] 00:11:09.437 }' 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.437 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 [2024-11-29 07:42:59.747820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.007 "name": "raid_bdev1", 00:11:10.007 "aliases": [ 00:11:10.007 "f5e6dc8d-1865-44db-8c30-9fe0181b43b4" 
00:11:10.007 ], 00:11:10.007 "product_name": "Raid Volume", 00:11:10.007 "block_size": 512, 00:11:10.007 "num_blocks": 253952, 00:11:10.007 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:10.007 "assigned_rate_limits": { 00:11:10.007 "rw_ios_per_sec": 0, 00:11:10.007 "rw_mbytes_per_sec": 0, 00:11:10.007 "r_mbytes_per_sec": 0, 00:11:10.007 "w_mbytes_per_sec": 0 00:11:10.007 }, 00:11:10.007 "claimed": false, 00:11:10.007 "zoned": false, 00:11:10.007 "supported_io_types": { 00:11:10.007 "read": true, 00:11:10.007 "write": true, 00:11:10.007 "unmap": true, 00:11:10.007 "flush": true, 00:11:10.007 "reset": true, 00:11:10.007 "nvme_admin": false, 00:11:10.007 "nvme_io": false, 00:11:10.007 "nvme_io_md": false, 00:11:10.007 "write_zeroes": true, 00:11:10.007 "zcopy": false, 00:11:10.007 "get_zone_info": false, 00:11:10.007 "zone_management": false, 00:11:10.007 "zone_append": false, 00:11:10.007 "compare": false, 00:11:10.007 "compare_and_write": false, 00:11:10.007 "abort": false, 00:11:10.007 "seek_hole": false, 00:11:10.007 "seek_data": false, 00:11:10.007 "copy": false, 00:11:10.007 "nvme_iov_md": false 00:11:10.007 }, 00:11:10.007 "memory_domains": [ 00:11:10.007 { 00:11:10.007 "dma_device_id": "system", 00:11:10.007 "dma_device_type": 1 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.007 "dma_device_type": 2 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "system", 00:11:10.007 "dma_device_type": 1 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.007 "dma_device_type": 2 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "system", 00:11:10.007 "dma_device_type": 1 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.007 "dma_device_type": 2 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": "system", 00:11:10.007 "dma_device_type": 1 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:10.007 "dma_device_type": 2 00:11:10.007 } 00:11:10.007 ], 00:11:10.007 "driver_specific": { 00:11:10.007 "raid": { 00:11:10.007 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:10.007 "strip_size_kb": 64, 00:11:10.007 "state": "online", 00:11:10.007 "raid_level": "raid0", 00:11:10.007 "superblock": true, 00:11:10.007 "num_base_bdevs": 4, 00:11:10.007 "num_base_bdevs_discovered": 4, 00:11:10.007 "num_base_bdevs_operational": 4, 00:11:10.007 "base_bdevs_list": [ 00:11:10.007 { 00:11:10.007 "name": "pt1", 00:11:10.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.007 "is_configured": true, 00:11:10.007 "data_offset": 2048, 00:11:10.007 "data_size": 63488 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "name": "pt2", 00:11:10.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.007 "is_configured": true, 00:11:10.007 "data_offset": 2048, 00:11:10.007 "data_size": 63488 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "name": "pt3", 00:11:10.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.007 "is_configured": true, 00:11:10.007 "data_offset": 2048, 00:11:10.007 "data_size": 63488 00:11:10.007 }, 00:11:10.007 { 00:11:10.007 "name": "pt4", 00:11:10.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.007 "is_configured": true, 00:11:10.007 "data_offset": 2048, 00:11:10.007 "data_size": 63488 00:11:10.007 } 00:11:10.007 ] 00:11:10.007 } 00:11:10.007 } 00:11:10.007 }' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.007 pt2 00:11:10.007 pt3 00:11:10.007 pt4' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.007 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.268 07:42:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 [2024-11-29 07:43:00.103195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f5e6dc8d-1865-44db-8c30-9fe0181b43b4 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f5e6dc8d-1865-44db-8c30-9fe0181b43b4 ']' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 [2024-11-29 07:43:00.146826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.268 [2024-11-29 07:43:00.146895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.268 [2024-11-29 07:43:00.147010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.268 [2024-11-29 07:43:00.147113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.268 [2024-11-29 07:43:00.147169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.268 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.529 07:43:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 [2024-11-29 07:43:00.290547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.529 [2024-11-29 07:43:00.292417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.529 [2024-11-29 07:43:00.292523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:10.529 [2024-11-29 07:43:00.292561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:10.529 [2024-11-29 07:43:00.292610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.529 [2024-11-29 07:43:00.292655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.529 [2024-11-29 07:43:00.292673] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:10.529 [2024-11-29 07:43:00.292692] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:10.529 [2024-11-29 07:43:00.292704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.529 [2024-11-29 07:43:00.292717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:10.529 request: 00:11:10.529 { 00:11:10.529 "name": "raid_bdev1", 00:11:10.529 "raid_level": "raid0", 00:11:10.529 "base_bdevs": [ 00:11:10.529 "malloc1", 00:11:10.529 "malloc2", 00:11:10.529 "malloc3", 00:11:10.529 "malloc4" 00:11:10.529 ], 00:11:10.529 "strip_size_kb": 64, 00:11:10.529 "superblock": false, 00:11:10.529 "method": "bdev_raid_create", 00:11:10.529 "req_id": 1 00:11:10.529 } 00:11:10.529 Got JSON-RPC error response 00:11:10.529 response: 00:11:10.529 { 00:11:10.529 "code": -17, 00:11:10.529 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.529 } 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.529 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 [2024-11-29 07:43:00.358398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.529 [2024-11-29 07:43:00.358503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.529 [2024-11-29 07:43:00.358549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.529 [2024-11-29 07:43:00.358595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.529 [2024-11-29 07:43:00.360786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.529 [2024-11-29 07:43:00.360864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.529 [2024-11-29 07:43:00.360965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.529 [2024-11-29 07:43:00.361045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.530 pt1 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.530 "name": "raid_bdev1", 00:11:10.530 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:10.530 "strip_size_kb": 64, 00:11:10.530 "state": "configuring", 00:11:10.530 "raid_level": "raid0", 00:11:10.530 "superblock": true, 00:11:10.530 "num_base_bdevs": 4, 00:11:10.530 "num_base_bdevs_discovered": 1, 00:11:10.530 "num_base_bdevs_operational": 4, 00:11:10.530 "base_bdevs_list": [ 00:11:10.530 { 00:11:10.530 "name": "pt1", 00:11:10.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.530 "is_configured": true, 00:11:10.530 "data_offset": 2048, 00:11:10.530 "data_size": 63488 00:11:10.530 }, 00:11:10.530 { 00:11:10.530 "name": null, 00:11:10.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.530 "is_configured": false, 00:11:10.530 "data_offset": 2048, 00:11:10.530 "data_size": 63488 00:11:10.530 }, 00:11:10.530 { 00:11:10.530 "name": null, 00:11:10.530 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.530 "is_configured": false, 00:11:10.530 "data_offset": 2048, 00:11:10.530 "data_size": 63488 00:11:10.530 }, 00:11:10.530 { 00:11:10.530 "name": null, 00:11:10.530 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.530 "is_configured": false, 00:11:10.530 "data_offset": 2048, 00:11:10.530 "data_size": 63488 00:11:10.530 } 00:11:10.530 ] 00:11:10.530 }' 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.530 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 [2024-11-29 07:43:00.841623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.100 [2024-11-29 07:43:00.841702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.100 [2024-11-29 07:43:00.841723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:11.100 [2024-11-29 07:43:00.841734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.100 [2024-11-29 07:43:00.842208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.100 [2024-11-29 07:43:00.842231] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.100 [2024-11-29 07:43:00.842320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.100 [2024-11-29 07:43:00.842345] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.100 pt2 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 [2024-11-29 07:43:00.853593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.100 07:43:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.100 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.100 "name": "raid_bdev1", 00:11:11.100 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:11.100 "strip_size_kb": 64, 00:11:11.100 "state": "configuring", 00:11:11.100 "raid_level": "raid0", 00:11:11.100 "superblock": true, 00:11:11.100 "num_base_bdevs": 4, 00:11:11.100 "num_base_bdevs_discovered": 1, 00:11:11.100 "num_base_bdevs_operational": 4, 00:11:11.100 "base_bdevs_list": [ 00:11:11.100 { 00:11:11.100 "name": "pt1", 00:11:11.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.100 "is_configured": true, 00:11:11.100 "data_offset": 2048, 00:11:11.100 "data_size": 63488 00:11:11.100 }, 00:11:11.100 { 00:11:11.100 "name": null, 00:11:11.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.100 "is_configured": false, 00:11:11.100 "data_offset": 0, 00:11:11.100 "data_size": 63488 00:11:11.100 }, 00:11:11.100 { 00:11:11.100 "name": null, 00:11:11.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.100 "is_configured": false, 00:11:11.100 "data_offset": 2048, 00:11:11.101 "data_size": 63488 00:11:11.101 }, 00:11:11.101 { 00:11:11.101 "name": null, 00:11:11.101 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.101 "is_configured": false, 00:11:11.101 "data_offset": 2048, 00:11:11.101 "data_size": 63488 00:11:11.101 } 00:11:11.101 ] 00:11:11.101 }' 00:11:11.101 07:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.101 07:43:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 [2024-11-29 07:43:01.280889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.361 [2024-11-29 07:43:01.280965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.361 [2024-11-29 07:43:01.280986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:11.361 [2024-11-29 07:43:01.280994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.361 [2024-11-29 07:43:01.281471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.361 [2024-11-29 07:43:01.281495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.361 [2024-11-29 07:43:01.281586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.361 [2024-11-29 07:43:01.281608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.361 pt2 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 [2024-11-29 07:43:01.292878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.361 [2024-11-29 07:43:01.292939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.361 [2024-11-29 07:43:01.292960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:11.361 [2024-11-29 07:43:01.292968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.361 [2024-11-29 07:43:01.293421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.361 [2024-11-29 07:43:01.293444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.361 [2024-11-29 07:43:01.293525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:11.361 [2024-11-29 07:43:01.293579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.361 pt3 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.361 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 [2024-11-29 07:43:01.304803] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:11.361 [2024-11-29 07:43:01.304851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.361 [2024-11-29 07:43:01.304869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:11.361 [2024-11-29 07:43:01.304878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.361 [2024-11-29 07:43:01.305307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.361 [2024-11-29 07:43:01.305324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:11.621 [2024-11-29 07:43:01.305389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:11.621 [2024-11-29 07:43:01.305418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:11.621 [2024-11-29 07:43:01.305567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.621 [2024-11-29 07:43:01.305576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:11.621 [2024-11-29 07:43:01.305824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:11.621 [2024-11-29 07:43:01.305990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:11.621 [2024-11-29 07:43:01.306008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:11.621 [2024-11-29 07:43:01.306158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.621 pt4 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.621 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.621 "name": "raid_bdev1", 00:11:11.621 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:11.621 "strip_size_kb": 64, 00:11:11.621 "state": "online", 00:11:11.621 "raid_level": "raid0", 00:11:11.621 
"superblock": true, 00:11:11.621 "num_base_bdevs": 4, 00:11:11.622 "num_base_bdevs_discovered": 4, 00:11:11.622 "num_base_bdevs_operational": 4, 00:11:11.622 "base_bdevs_list": [ 00:11:11.622 { 00:11:11.622 "name": "pt1", 00:11:11.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.622 "is_configured": true, 00:11:11.622 "data_offset": 2048, 00:11:11.622 "data_size": 63488 00:11:11.622 }, 00:11:11.622 { 00:11:11.622 "name": "pt2", 00:11:11.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.622 "is_configured": true, 00:11:11.622 "data_offset": 2048, 00:11:11.622 "data_size": 63488 00:11:11.622 }, 00:11:11.622 { 00:11:11.622 "name": "pt3", 00:11:11.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.622 "is_configured": true, 00:11:11.622 "data_offset": 2048, 00:11:11.622 "data_size": 63488 00:11:11.622 }, 00:11:11.622 { 00:11:11.622 "name": "pt4", 00:11:11.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.622 "is_configured": true, 00:11:11.622 "data_offset": 2048, 00:11:11.622 "data_size": 63488 00:11:11.622 } 00:11:11.622 ] 00:11:11.622 }' 00:11:11.622 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.622 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.882 07:43:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.882 [2024-11-29 07:43:01.780340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.882 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.882 "name": "raid_bdev1", 00:11:11.882 "aliases": [ 00:11:11.882 "f5e6dc8d-1865-44db-8c30-9fe0181b43b4" 00:11:11.882 ], 00:11:11.882 "product_name": "Raid Volume", 00:11:11.882 "block_size": 512, 00:11:11.882 "num_blocks": 253952, 00:11:11.882 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:11.882 "assigned_rate_limits": { 00:11:11.882 "rw_ios_per_sec": 0, 00:11:11.882 "rw_mbytes_per_sec": 0, 00:11:11.882 "r_mbytes_per_sec": 0, 00:11:11.882 "w_mbytes_per_sec": 0 00:11:11.882 }, 00:11:11.882 "claimed": false, 00:11:11.882 "zoned": false, 00:11:11.882 "supported_io_types": { 00:11:11.882 "read": true, 00:11:11.882 "write": true, 00:11:11.882 "unmap": true, 00:11:11.882 "flush": true, 00:11:11.882 "reset": true, 00:11:11.882 "nvme_admin": false, 00:11:11.882 "nvme_io": false, 00:11:11.882 "nvme_io_md": false, 00:11:11.882 "write_zeroes": true, 00:11:11.882 "zcopy": false, 00:11:11.882 "get_zone_info": false, 00:11:11.882 "zone_management": false, 00:11:11.882 "zone_append": false, 00:11:11.882 "compare": false, 00:11:11.882 "compare_and_write": false, 00:11:11.882 "abort": false, 00:11:11.882 "seek_hole": false, 00:11:11.882 "seek_data": false, 00:11:11.882 "copy": false, 00:11:11.882 "nvme_iov_md": false 00:11:11.882 }, 00:11:11.882 
"memory_domains": [ 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 } 00:11:11.882 ], 00:11:11.882 "driver_specific": { 00:11:11.882 "raid": { 00:11:11.882 "uuid": "f5e6dc8d-1865-44db-8c30-9fe0181b43b4", 00:11:11.882 "strip_size_kb": 64, 00:11:11.882 "state": "online", 00:11:11.882 "raid_level": "raid0", 00:11:11.882 "superblock": true, 00:11:11.882 "num_base_bdevs": 4, 00:11:11.882 "num_base_bdevs_discovered": 4, 00:11:11.882 "num_base_bdevs_operational": 4, 00:11:11.882 "base_bdevs_list": [ 00:11:11.882 { 00:11:11.882 "name": "pt1", 00:11:11.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.882 "is_configured": true, 00:11:11.882 "data_offset": 2048, 00:11:11.882 "data_size": 63488 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "name": "pt2", 00:11:11.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.882 "is_configured": true, 00:11:11.882 "data_offset": 2048, 00:11:11.882 "data_size": 63488 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "name": "pt3", 00:11:11.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.882 "is_configured": true, 00:11:11.883 "data_offset": 2048, 00:11:11.883 "data_size": 63488 
00:11:11.883 }, 00:11:11.883 { 00:11:11.883 "name": "pt4", 00:11:11.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.883 "is_configured": true, 00:11:11.883 "data_offset": 2048, 00:11:11.883 "data_size": 63488 00:11:11.883 } 00:11:11.883 ] 00:11:11.883 } 00:11:11.883 } 00:11:11.883 }' 00:11:11.883 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.142 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:12.142 pt2 00:11:12.142 pt3 00:11:12.143 pt4' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.143 07:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.143 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:12.402 [2024-11-29 07:43:02.103723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f5e6dc8d-1865-44db-8c30-9fe0181b43b4 '!=' f5e6dc8d-1865-44db-8c30-9fe0181b43b4 ']' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70498 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70498 ']' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70498 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70498 00:11:12.402 killing process with pid 70498 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70498' 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70498 00:11:12.402 [2024-11-29 07:43:02.187901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.402 [2024-11-29 07:43:02.187987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.402 07:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70498 00:11:12.402 [2024-11-29 07:43:02.188059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.402 [2024-11-29 07:43:02.188068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.661 [2024-11-29 07:43:02.573078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.040 07:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:14.040 00:11:14.040 real 0m5.571s 00:11:14.041 user 0m8.017s 00:11:14.041 sys 0m0.990s 00:11:14.041 07:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.041 07:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.041 ************************************ 00:11:14.041 END TEST raid_superblock_test 
00:11:14.041 ************************************ 00:11:14.041 07:43:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:14.041 07:43:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.041 07:43:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.041 07:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.041 ************************************ 00:11:14.041 START TEST raid_read_error_test 00:11:14.041 ************************************ 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PVaRztJqeq 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70759 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70759 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70759 ']' 00:11:14.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.041 07:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.041 [2024-11-29 07:43:03.857584] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:14.041 [2024-11-29 07:43:03.857761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:11:14.301 [2024-11-29 07:43:04.031163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.301 [2024-11-29 07:43:04.139265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.560 [2024-11-29 07:43:04.335446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.560 [2024-11-29 07:43:04.335537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 BaseBdev1_malloc 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 true 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 [2024-11-29 07:43:04.739286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.821 [2024-11-29 07:43:04.739340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.821 [2024-11-29 07:43:04.739376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.821 [2024-11-29 07:43:04.739387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.821 [2024-11-29 07:43:04.741491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.821 [2024-11-29 07:43:04.741535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.821 BaseBdev1 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 BaseBdev2_malloc 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 true 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 [2024-11-29 07:43:04.804521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.081 [2024-11-29 07:43:04.804572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.081 [2024-11-29 07:43:04.804588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:15.081 [2024-11-29 07:43:04.804598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.081 [2024-11-29 07:43:04.806595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.081 [2024-11-29 07:43:04.806635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.081 BaseBdev2 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 BaseBdev3_malloc 00:11:15.081 07:43:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 true 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [2024-11-29 07:43:04.882350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:15.082 [2024-11-29 07:43:04.882409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.082 [2024-11-29 07:43:04.882441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:15.082 [2024-11-29 07:43:04.882451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.082 [2024-11-29 07:43:04.884497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.082 [2024-11-29 07:43:04.884614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:15.082 BaseBdev3 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 BaseBdev4_malloc 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 true 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [2024-11-29 07:43:04.947278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:15.082 [2024-11-29 07:43:04.947330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.082 [2024-11-29 07:43:04.947363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.082 [2024-11-29 07:43:04.947374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.082 [2024-11-29 07:43:04.949457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.082 [2024-11-29 07:43:04.949500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:15.082 BaseBdev4 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [2024-11-29 07:43:04.959327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.082 [2024-11-29 07:43:04.961127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.082 [2024-11-29 07:43:04.961215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.082 [2024-11-29 07:43:04.961278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.082 [2024-11-29 07:43:04.961502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:15.082 [2024-11-29 07:43:04.961520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.082 [2024-11-29 07:43:04.961751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:15.082 [2024-11-29 07:43:04.961915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:15.082 [2024-11-29 07:43:04.961926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:15.082 [2024-11-29 07:43:04.962076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:15.082 07:43:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 07:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 07:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.082 "name": "raid_bdev1", 00:11:15.082 "uuid": "1ba2a530-fde7-4508-82d7-687263e57ed1", 00:11:15.082 "strip_size_kb": 64, 00:11:15.082 "state": "online", 00:11:15.082 "raid_level": "raid0", 00:11:15.082 "superblock": true, 00:11:15.082 "num_base_bdevs": 4, 00:11:15.082 "num_base_bdevs_discovered": 4, 00:11:15.082 "num_base_bdevs_operational": 4, 00:11:15.082 "base_bdevs_list": [ 00:11:15.082 
{ 00:11:15.082 "name": "BaseBdev1", 00:11:15.082 "uuid": "ea595f44-32d7-5fbe-a467-ba70923c7da2", 00:11:15.082 "is_configured": true, 00:11:15.082 "data_offset": 2048, 00:11:15.082 "data_size": 63488 00:11:15.082 }, 00:11:15.082 { 00:11:15.082 "name": "BaseBdev2", 00:11:15.082 "uuid": "03582026-6e36-5858-8c12-0e48dd352a22", 00:11:15.082 "is_configured": true, 00:11:15.082 "data_offset": 2048, 00:11:15.082 "data_size": 63488 00:11:15.082 }, 00:11:15.082 { 00:11:15.082 "name": "BaseBdev3", 00:11:15.082 "uuid": "897e5d55-0f3c-5945-99cf-ebf55526734c", 00:11:15.082 "is_configured": true, 00:11:15.082 "data_offset": 2048, 00:11:15.082 "data_size": 63488 00:11:15.082 }, 00:11:15.082 { 00:11:15.082 "name": "BaseBdev4", 00:11:15.082 "uuid": "c45cd3ad-f020-53ab-8453-eda221222261", 00:11:15.082 "is_configured": true, 00:11:15.082 "data_offset": 2048, 00:11:15.082 "data_size": 63488 00:11:15.082 } 00:11:15.082 ] 00:11:15.082 }' 00:11:15.082 07:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.082 07:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.657 07:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.657 07:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:15.657 [2024-11-29 07:43:05.515678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.634 07:43:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.634 07:43:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.634 "name": "raid_bdev1", 00:11:16.634 "uuid": "1ba2a530-fde7-4508-82d7-687263e57ed1", 00:11:16.634 "strip_size_kb": 64, 00:11:16.634 "state": "online", 00:11:16.634 "raid_level": "raid0", 00:11:16.634 "superblock": true, 00:11:16.634 "num_base_bdevs": 4, 00:11:16.634 "num_base_bdevs_discovered": 4, 00:11:16.634 "num_base_bdevs_operational": 4, 00:11:16.634 "base_bdevs_list": [ 00:11:16.634 { 00:11:16.634 "name": "BaseBdev1", 00:11:16.634 "uuid": "ea595f44-32d7-5fbe-a467-ba70923c7da2", 00:11:16.634 "is_configured": true, 00:11:16.634 "data_offset": 2048, 00:11:16.634 "data_size": 63488 00:11:16.634 }, 00:11:16.634 { 00:11:16.634 "name": "BaseBdev2", 00:11:16.634 "uuid": "03582026-6e36-5858-8c12-0e48dd352a22", 00:11:16.634 "is_configured": true, 00:11:16.634 "data_offset": 2048, 00:11:16.634 "data_size": 63488 00:11:16.634 }, 00:11:16.634 { 00:11:16.634 "name": "BaseBdev3", 00:11:16.634 "uuid": "897e5d55-0f3c-5945-99cf-ebf55526734c", 00:11:16.634 "is_configured": true, 00:11:16.634 "data_offset": 2048, 00:11:16.634 "data_size": 63488 00:11:16.634 }, 00:11:16.634 { 00:11:16.634 "name": "BaseBdev4", 00:11:16.634 "uuid": "c45cd3ad-f020-53ab-8453-eda221222261", 00:11:16.634 "is_configured": true, 00:11:16.634 "data_offset": 2048, 00:11:16.634 "data_size": 63488 00:11:16.634 } 00:11:16.634 ] 00:11:16.634 }' 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.634 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 [2024-11-29 07:43:06.912117] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.204 [2024-11-29 07:43:06.912211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.204 [2024-11-29 07:43:06.915174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.204 [2024-11-29 07:43:06.915276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.204 [2024-11-29 07:43:06.915343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.204 [2024-11-29 07:43:06.915416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:17.204 { 00:11:17.204 "results": [ 00:11:17.204 { 00:11:17.204 "job": "raid_bdev1", 00:11:17.204 "core_mask": "0x1", 00:11:17.204 "workload": "randrw", 00:11:17.204 "percentage": 50, 00:11:17.204 "status": "finished", 00:11:17.204 "queue_depth": 1, 00:11:17.204 "io_size": 131072, 00:11:17.204 "runtime": 1.397548, 00:11:17.204 "iops": 15474.244891767581, 00:11:17.204 "mibps": 1934.2806114709476, 00:11:17.204 "io_failed": 1, 00:11:17.204 "io_timeout": 0, 00:11:17.204 "avg_latency_us": 89.68876240943362, 00:11:17.204 "min_latency_us": 25.041048034934498, 00:11:17.204 "max_latency_us": 1509.6174672489083 00:11:17.204 } 00:11:17.204 ], 00:11:17.204 "core_count": 1 00:11:17.204 } 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70759 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70759 ']' 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70759 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70759 00:11:17.204 killing process with pid 70759 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70759' 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70759 00:11:17.204 [2024-11-29 07:43:06.963146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.204 07:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70759 00:11:17.464 [2024-11-29 07:43:07.283781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PVaRztJqeq 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:18.847 ************************************ 00:11:18.847 END TEST raid_read_error_test 00:11:18.847 ************************************ 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:18.847 00:11:18.847 real 0m4.704s 
00:11:18.847 user 0m5.556s 00:11:18.847 sys 0m0.580s 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.847 07:43:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.847 07:43:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:18.847 07:43:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.847 07:43:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.847 07:43:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.847 ************************************ 00:11:18.847 START TEST raid_write_error_test 00:11:18.847 ************************************ 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0hPxiehBfY 00:11:18.847 07:43:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70905 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70905 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70905 ']' 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.847 07:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.847 [2024-11-29 07:43:08.629480] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:18.847 [2024-11-29 07:43:08.629682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:11:19.108 [2024-11-29 07:43:08.803565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.108 [2024-11-29 07:43:08.911031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.367 [2024-11-29 07:43:09.109803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.367 [2024-11-29 07:43:09.109869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.627 BaseBdev1_malloc 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.627 true 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.627 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.627 [2024-11-29 07:43:09.521631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.627 [2024-11-29 07:43:09.521727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.628 [2024-11-29 07:43:09.521767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.628 [2024-11-29 07:43:09.521778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.628 [2024-11-29 07:43:09.523871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.628 [2024-11-29 07:43:09.523914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.628 BaseBdev1 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.628 BaseBdev2_malloc 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.628 07:43:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.628 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 true 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 [2024-11-29 07:43:09.587871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.889 [2024-11-29 07:43:09.587931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.889 [2024-11-29 07:43:09.587949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.889 [2024-11-29 07:43:09.587960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.889 [2024-11-29 07:43:09.590034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.889 [2024-11-29 07:43:09.590076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.889 BaseBdev2 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:19.889 BaseBdev3_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 true 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 [2024-11-29 07:43:09.665377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.889 [2024-11-29 07:43:09.665429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.889 [2024-11-29 07:43:09.665445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:19.889 [2024-11-29 07:43:09.665455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.889 [2024-11-29 07:43:09.667553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.889 [2024-11-29 07:43:09.667628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:19.889 BaseBdev3 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 BaseBdev4_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 true 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 [2024-11-29 07:43:09.731149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:19.889 [2024-11-29 07:43:09.731201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.889 [2024-11-29 07:43:09.731234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:19.889 [2024-11-29 07:43:09.731245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.889 [2024-11-29 07:43:09.733352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.889 [2024-11-29 07:43:09.733433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:19.889 BaseBdev4 
00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 [2024-11-29 07:43:09.743193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.889 [2024-11-29 07:43:09.744985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.889 [2024-11-29 07:43:09.745113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.889 [2024-11-29 07:43:09.745180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:19.889 [2024-11-29 07:43:09.745395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:19.889 [2024-11-29 07:43:09.745413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:19.889 [2024-11-29 07:43:09.745641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:19.889 [2024-11-29 07:43:09.745786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:19.889 [2024-11-29 07:43:09.745797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:19.889 [2024-11-29 07:43:09.745941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.889 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.889 "name": "raid_bdev1", 00:11:19.889 "uuid": "f8828228-0d05-469c-8ad9-c9d37817f416", 00:11:19.889 "strip_size_kb": 64, 00:11:19.889 "state": "online", 00:11:19.889 "raid_level": "raid0", 00:11:19.889 "superblock": true, 00:11:19.889 "num_base_bdevs": 4, 00:11:19.889 "num_base_bdevs_discovered": 4, 00:11:19.889 
"num_base_bdevs_operational": 4, 00:11:19.889 "base_bdevs_list": [ 00:11:19.889 { 00:11:19.889 "name": "BaseBdev1", 00:11:19.889 "uuid": "43ea7465-29ac-562d-8242-80f5e4780712", 00:11:19.889 "is_configured": true, 00:11:19.889 "data_offset": 2048, 00:11:19.889 "data_size": 63488 00:11:19.889 }, 00:11:19.889 { 00:11:19.889 "name": "BaseBdev2", 00:11:19.889 "uuid": "4fdc534d-ebfb-5c9e-ade7-a24723238750", 00:11:19.889 "is_configured": true, 00:11:19.889 "data_offset": 2048, 00:11:19.889 "data_size": 63488 00:11:19.889 }, 00:11:19.889 { 00:11:19.889 "name": "BaseBdev3", 00:11:19.889 "uuid": "50c4a6a7-8738-54c9-a891-70262c0f96fa", 00:11:19.890 "is_configured": true, 00:11:19.890 "data_offset": 2048, 00:11:19.890 "data_size": 63488 00:11:19.890 }, 00:11:19.890 { 00:11:19.890 "name": "BaseBdev4", 00:11:19.890 "uuid": "3952db7e-6d68-533b-9f4f-c8094cd25bd1", 00:11:19.890 "is_configured": true, 00:11:19.890 "data_offset": 2048, 00:11:19.890 "data_size": 63488 00:11:19.890 } 00:11:19.890 ] 00:11:19.890 }' 00:11:19.890 07:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.890 07:43:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.461 07:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:20.461 07:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.461 [2024-11-29 07:43:10.335371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.399 "name": "raid_bdev1", 00:11:21.399 "uuid": "f8828228-0d05-469c-8ad9-c9d37817f416", 00:11:21.399 "strip_size_kb": 64, 00:11:21.399 "state": "online", 00:11:21.399 "raid_level": "raid0", 00:11:21.399 "superblock": true, 00:11:21.399 "num_base_bdevs": 4, 00:11:21.399 "num_base_bdevs_discovered": 4, 00:11:21.399 "num_base_bdevs_operational": 4, 00:11:21.399 "base_bdevs_list": [ 00:11:21.399 { 00:11:21.399 "name": "BaseBdev1", 00:11:21.399 "uuid": "43ea7465-29ac-562d-8242-80f5e4780712", 00:11:21.399 "is_configured": true, 00:11:21.399 "data_offset": 2048, 00:11:21.399 "data_size": 63488 00:11:21.399 }, 00:11:21.399 { 00:11:21.399 "name": "BaseBdev2", 00:11:21.399 "uuid": "4fdc534d-ebfb-5c9e-ade7-a24723238750", 00:11:21.399 "is_configured": true, 00:11:21.399 "data_offset": 2048, 00:11:21.399 "data_size": 63488 00:11:21.399 }, 00:11:21.399 { 00:11:21.399 "name": "BaseBdev3", 00:11:21.399 "uuid": "50c4a6a7-8738-54c9-a891-70262c0f96fa", 00:11:21.399 "is_configured": true, 00:11:21.399 "data_offset": 2048, 00:11:21.399 "data_size": 63488 00:11:21.399 }, 00:11:21.399 { 00:11:21.399 "name": "BaseBdev4", 00:11:21.399 "uuid": "3952db7e-6d68-533b-9f4f-c8094cd25bd1", 00:11:21.399 "is_configured": true, 00:11:21.399 "data_offset": 2048, 00:11:21.399 "data_size": 63488 00:11:21.399 } 00:11:21.399 ] 00:11:21.399 }' 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.399 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:21.969 [2024-11-29 07:43:11.671303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.969 [2024-11-29 07:43:11.671413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.969 [2024-11-29 07:43:11.674251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.969 [2024-11-29 07:43:11.674377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.969 [2024-11-29 07:43:11.674443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.969 [2024-11-29 07:43:11.674523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:21.969 { 00:11:21.969 "results": [ 00:11:21.969 { 00:11:21.969 "job": "raid_bdev1", 00:11:21.969 "core_mask": "0x1", 00:11:21.969 "workload": "randrw", 00:11:21.969 "percentage": 50, 00:11:21.969 "status": "finished", 00:11:21.969 "queue_depth": 1, 00:11:21.969 "io_size": 131072, 00:11:21.969 "runtime": 1.336718, 00:11:21.969 "iops": 15469.231356202281, 00:11:21.969 "mibps": 1933.6539195252851, 00:11:21.969 "io_failed": 1, 00:11:21.969 "io_timeout": 0, 00:11:21.969 "avg_latency_us": 89.6660827778999, 00:11:21.969 "min_latency_us": 26.1589519650655, 00:11:21.969 "max_latency_us": 1466.6899563318777 00:11:21.969 } 00:11:21.969 ], 00:11:21.969 "core_count": 1 00:11:21.969 } 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70905 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70905 ']' 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70905 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70905 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.969 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.970 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70905' 00:11:21.970 killing process with pid 70905 00:11:21.970 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70905 00:11:21.970 [2024-11-29 07:43:11.720186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.970 07:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70905 00:11:22.229 [2024-11-29 07:43:12.043972] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0hPxiehBfY 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:23.611 00:11:23.611 real 0m4.697s 00:11:23.611 user 0m5.562s 00:11:23.611 sys 0m0.563s 00:11:23.611 07:43:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.611 ************************************ 00:11:23.611 END TEST raid_write_error_test 00:11:23.611 ************************************ 00:11:23.611 07:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.611 07:43:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.611 07:43:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:23.611 07:43:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.611 07:43:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.611 07:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.611 ************************************ 00:11:23.611 START TEST raid_state_function_test 00:11:23.611 ************************************ 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.611 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71054 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71054' 00:11:23.612 Process raid pid: 71054 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71054 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71054 ']' 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.612 07:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.612 [2024-11-29 07:43:13.386213] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:23.612 [2024-11-29 07:43:13.386404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.872 [2024-11-29 07:43:13.560334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.872 [2024-11-29 07:43:13.671013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.132 [2024-11-29 07:43:13.874938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.132 [2024-11-29 07:43:13.875079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.393 [2024-11-29 07:43:14.226060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.393 [2024-11-29 07:43:14.226192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.393 [2024-11-29 07:43:14.226209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.393 [2024-11-29 07:43:14.226219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.393 [2024-11-29 07:43:14.226226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:24.393 [2024-11-29 07:43:14.226235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.393 [2024-11-29 07:43:14.226242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.393 [2024-11-29 07:43:14.226250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.393 "name": "Existed_Raid", 00:11:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.393 "strip_size_kb": 64, 00:11:24.393 "state": "configuring", 00:11:24.393 "raid_level": "concat", 00:11:24.393 "superblock": false, 00:11:24.393 "num_base_bdevs": 4, 00:11:24.393 "num_base_bdevs_discovered": 0, 00:11:24.393 "num_base_bdevs_operational": 4, 00:11:24.393 "base_bdevs_list": [ 00:11:24.393 { 00:11:24.393 "name": "BaseBdev1", 00:11:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.393 "is_configured": false, 00:11:24.393 "data_offset": 0, 00:11:24.393 "data_size": 0 00:11:24.393 }, 00:11:24.393 { 00:11:24.393 "name": "BaseBdev2", 00:11:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.393 "is_configured": false, 00:11:24.393 "data_offset": 0, 00:11:24.393 "data_size": 0 00:11:24.393 }, 00:11:24.393 { 00:11:24.393 "name": "BaseBdev3", 00:11:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.393 "is_configured": false, 00:11:24.393 "data_offset": 0, 00:11:24.393 "data_size": 0 00:11:24.393 }, 00:11:24.393 { 00:11:24.393 "name": "BaseBdev4", 00:11:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.393 "is_configured": false, 00:11:24.393 "data_offset": 0, 00:11:24.393 "data_size": 0 00:11:24.393 } 00:11:24.393 ] 00:11:24.393 }' 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.393 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [2024-11-29 07:43:14.685248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.964 [2024-11-29 07:43:14.685358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [2024-11-29 07:43:14.693220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.964 [2024-11-29 07:43:14.693300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.964 [2024-11-29 07:43:14.693326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.964 [2024-11-29 07:43:14.693348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.964 [2024-11-29 07:43:14.693366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.964 [2024-11-29 07:43:14.693386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.964 [2024-11-29 07:43:14.693403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.964 [2024-11-29 07:43:14.693423] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [2024-11-29 07:43:14.741507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.964 BaseBdev1 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [ 00:11:24.964 { 00:11:24.964 "name": "BaseBdev1", 00:11:24.964 "aliases": [ 00:11:24.964 "1201830e-67bd-4072-a72d-c7fbf7b76199" 00:11:24.964 ], 00:11:24.964 "product_name": "Malloc disk", 00:11:24.964 "block_size": 512, 00:11:24.964 "num_blocks": 65536, 00:11:24.964 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:24.964 "assigned_rate_limits": { 00:11:24.964 "rw_ios_per_sec": 0, 00:11:24.964 "rw_mbytes_per_sec": 0, 00:11:24.964 "r_mbytes_per_sec": 0, 00:11:24.964 "w_mbytes_per_sec": 0 00:11:24.964 }, 00:11:24.964 "claimed": true, 00:11:24.964 "claim_type": "exclusive_write", 00:11:24.964 "zoned": false, 00:11:24.964 "supported_io_types": { 00:11:24.964 "read": true, 00:11:24.964 "write": true, 00:11:24.964 "unmap": true, 00:11:24.964 "flush": true, 00:11:24.964 "reset": true, 00:11:24.964 "nvme_admin": false, 00:11:24.964 "nvme_io": false, 00:11:24.964 "nvme_io_md": false, 00:11:24.964 "write_zeroes": true, 00:11:24.964 "zcopy": true, 00:11:24.964 "get_zone_info": false, 00:11:24.964 "zone_management": false, 00:11:24.964 "zone_append": false, 00:11:24.964 "compare": false, 00:11:24.964 "compare_and_write": false, 00:11:24.964 "abort": true, 00:11:24.964 "seek_hole": false, 00:11:24.964 "seek_data": false, 00:11:24.964 "copy": true, 00:11:24.964 "nvme_iov_md": false 00:11:24.964 }, 00:11:24.964 "memory_domains": [ 00:11:24.964 { 00:11:24.964 "dma_device_id": "system", 00:11:24.964 "dma_device_type": 1 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.964 "dma_device_type": 2 00:11:24.964 } 00:11:24.964 ], 00:11:24.964 "driver_specific": {} 00:11:24.964 } 00:11:24.964 ] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.964 "name": "Existed_Raid", 
00:11:24.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.964 "strip_size_kb": 64, 00:11:24.964 "state": "configuring", 00:11:24.964 "raid_level": "concat", 00:11:24.964 "superblock": false, 00:11:24.964 "num_base_bdevs": 4, 00:11:24.964 "num_base_bdevs_discovered": 1, 00:11:24.964 "num_base_bdevs_operational": 4, 00:11:24.964 "base_bdevs_list": [ 00:11:24.964 { 00:11:24.964 "name": "BaseBdev1", 00:11:24.964 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:24.964 "is_configured": true, 00:11:24.964 "data_offset": 0, 00:11:24.964 "data_size": 65536 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "name": "BaseBdev2", 00:11:24.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.964 "is_configured": false, 00:11:24.964 "data_offset": 0, 00:11:24.964 "data_size": 0 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "name": "BaseBdev3", 00:11:24.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.964 "is_configured": false, 00:11:24.964 "data_offset": 0, 00:11:24.964 "data_size": 0 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "name": "BaseBdev4", 00:11:24.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.964 "is_configured": false, 00:11:24.964 "data_offset": 0, 00:11:24.964 "data_size": 0 00:11:24.964 } 00:11:24.964 ] 00:11:24.964 }' 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.964 07:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.534 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.534 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.534 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 [2024-11-29 07:43:15.228730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.535 [2024-11-29 07:43:15.228787] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 [2024-11-29 07:43:15.240773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.535 [2024-11-29 07:43:15.242550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.535 [2024-11-29 07:43:15.242647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.535 [2024-11-29 07:43:15.242678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.535 [2024-11-29 07:43:15.242689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.535 [2024-11-29 07:43:15.242696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:25.535 [2024-11-29 07:43:15.242704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.535 "name": "Existed_Raid", 00:11:25.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.535 "strip_size_kb": 64, 00:11:25.535 "state": "configuring", 00:11:25.535 "raid_level": "concat", 00:11:25.535 "superblock": false, 00:11:25.535 "num_base_bdevs": 4, 00:11:25.535 
"num_base_bdevs_discovered": 1, 00:11:25.535 "num_base_bdevs_operational": 4, 00:11:25.535 "base_bdevs_list": [ 00:11:25.535 { 00:11:25.535 "name": "BaseBdev1", 00:11:25.535 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:25.535 "is_configured": true, 00:11:25.535 "data_offset": 0, 00:11:25.535 "data_size": 65536 00:11:25.535 }, 00:11:25.535 { 00:11:25.535 "name": "BaseBdev2", 00:11:25.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.535 "is_configured": false, 00:11:25.535 "data_offset": 0, 00:11:25.535 "data_size": 0 00:11:25.535 }, 00:11:25.535 { 00:11:25.535 "name": "BaseBdev3", 00:11:25.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.535 "is_configured": false, 00:11:25.535 "data_offset": 0, 00:11:25.535 "data_size": 0 00:11:25.535 }, 00:11:25.535 { 00:11:25.535 "name": "BaseBdev4", 00:11:25.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.535 "is_configured": false, 00:11:25.535 "data_offset": 0, 00:11:25.535 "data_size": 0 00:11:25.535 } 00:11:25.535 ] 00:11:25.535 }' 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.535 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 [2024-11-29 07:43:15.722060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.796 BaseBdev2 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.796 07:43:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.056 [ 00:11:26.056 { 00:11:26.056 "name": "BaseBdev2", 00:11:26.056 "aliases": [ 00:11:26.056 "be0babd5-641f-4a93-950f-3ad930810182" 00:11:26.056 ], 00:11:26.056 "product_name": "Malloc disk", 00:11:26.056 "block_size": 512, 00:11:26.056 "num_blocks": 65536, 00:11:26.056 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:26.056 "assigned_rate_limits": { 00:11:26.056 "rw_ios_per_sec": 0, 00:11:26.056 "rw_mbytes_per_sec": 0, 00:11:26.056 "r_mbytes_per_sec": 0, 00:11:26.057 "w_mbytes_per_sec": 0 00:11:26.057 }, 00:11:26.057 "claimed": true, 00:11:26.057 "claim_type": "exclusive_write", 00:11:26.057 "zoned": false, 00:11:26.057 "supported_io_types": { 
00:11:26.057 "read": true, 00:11:26.057 "write": true, 00:11:26.057 "unmap": true, 00:11:26.057 "flush": true, 00:11:26.057 "reset": true, 00:11:26.057 "nvme_admin": false, 00:11:26.057 "nvme_io": false, 00:11:26.057 "nvme_io_md": false, 00:11:26.057 "write_zeroes": true, 00:11:26.057 "zcopy": true, 00:11:26.057 "get_zone_info": false, 00:11:26.057 "zone_management": false, 00:11:26.057 "zone_append": false, 00:11:26.057 "compare": false, 00:11:26.057 "compare_and_write": false, 00:11:26.057 "abort": true, 00:11:26.057 "seek_hole": false, 00:11:26.057 "seek_data": false, 00:11:26.057 "copy": true, 00:11:26.057 "nvme_iov_md": false 00:11:26.057 }, 00:11:26.057 "memory_domains": [ 00:11:26.057 { 00:11:26.057 "dma_device_id": "system", 00:11:26.057 "dma_device_type": 1 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.057 "dma_device_type": 2 00:11:26.057 } 00:11:26.057 ], 00:11:26.057 "driver_specific": {} 00:11:26.057 } 00:11:26.057 ] 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.057 "name": "Existed_Raid", 00:11:26.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.057 "strip_size_kb": 64, 00:11:26.057 "state": "configuring", 00:11:26.057 "raid_level": "concat", 00:11:26.057 "superblock": false, 00:11:26.057 "num_base_bdevs": 4, 00:11:26.057 "num_base_bdevs_discovered": 2, 00:11:26.057 "num_base_bdevs_operational": 4, 00:11:26.057 "base_bdevs_list": [ 00:11:26.057 { 00:11:26.057 "name": "BaseBdev1", 00:11:26.057 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:26.057 "is_configured": true, 00:11:26.057 "data_offset": 0, 00:11:26.057 "data_size": 65536 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "name": "BaseBdev2", 00:11:26.057 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:26.057 
"is_configured": true, 00:11:26.057 "data_offset": 0, 00:11:26.057 "data_size": 65536 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "name": "BaseBdev3", 00:11:26.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.057 "is_configured": false, 00:11:26.057 "data_offset": 0, 00:11:26.057 "data_size": 0 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "name": "BaseBdev4", 00:11:26.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.057 "is_configured": false, 00:11:26.057 "data_offset": 0, 00:11:26.057 "data_size": 0 00:11:26.057 } 00:11:26.057 ] 00:11:26.057 }' 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.057 07:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.318 [2024-11-29 07:43:16.184130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.318 BaseBdev3 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.318 [ 00:11:26.318 { 00:11:26.318 "name": "BaseBdev3", 00:11:26.318 "aliases": [ 00:11:26.318 "641cb118-3c98-47d6-a36f-47acd41d78d7" 00:11:26.318 ], 00:11:26.318 "product_name": "Malloc disk", 00:11:26.318 "block_size": 512, 00:11:26.318 "num_blocks": 65536, 00:11:26.318 "uuid": "641cb118-3c98-47d6-a36f-47acd41d78d7", 00:11:26.318 "assigned_rate_limits": { 00:11:26.318 "rw_ios_per_sec": 0, 00:11:26.318 "rw_mbytes_per_sec": 0, 00:11:26.318 "r_mbytes_per_sec": 0, 00:11:26.318 "w_mbytes_per_sec": 0 00:11:26.318 }, 00:11:26.318 "claimed": true, 00:11:26.318 "claim_type": "exclusive_write", 00:11:26.318 "zoned": false, 00:11:26.318 "supported_io_types": { 00:11:26.318 "read": true, 00:11:26.318 "write": true, 00:11:26.318 "unmap": true, 00:11:26.318 "flush": true, 00:11:26.318 "reset": true, 00:11:26.318 "nvme_admin": false, 00:11:26.318 "nvme_io": false, 00:11:26.318 "nvme_io_md": false, 00:11:26.318 "write_zeroes": true, 00:11:26.318 "zcopy": true, 00:11:26.318 "get_zone_info": false, 00:11:26.318 "zone_management": false, 00:11:26.318 "zone_append": false, 00:11:26.318 "compare": false, 00:11:26.318 "compare_and_write": false, 
00:11:26.318 "abort": true, 00:11:26.318 "seek_hole": false, 00:11:26.318 "seek_data": false, 00:11:26.318 "copy": true, 00:11:26.318 "nvme_iov_md": false 00:11:26.318 }, 00:11:26.318 "memory_domains": [ 00:11:26.318 { 00:11:26.318 "dma_device_id": "system", 00:11:26.318 "dma_device_type": 1 00:11:26.318 }, 00:11:26.318 { 00:11:26.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.318 "dma_device_type": 2 00:11:26.318 } 00:11:26.318 ], 00:11:26.318 "driver_specific": {} 00:11:26.318 } 00:11:26.318 ] 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.318 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.578 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.578 "name": "Existed_Raid", 00:11:26.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.578 "strip_size_kb": 64, 00:11:26.578 "state": "configuring", 00:11:26.578 "raid_level": "concat", 00:11:26.578 "superblock": false, 00:11:26.578 "num_base_bdevs": 4, 00:11:26.578 "num_base_bdevs_discovered": 3, 00:11:26.578 "num_base_bdevs_operational": 4, 00:11:26.578 "base_bdevs_list": [ 00:11:26.578 { 00:11:26.578 "name": "BaseBdev1", 00:11:26.578 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:26.578 "is_configured": true, 00:11:26.578 "data_offset": 0, 00:11:26.578 "data_size": 65536 00:11:26.578 }, 00:11:26.578 { 00:11:26.578 "name": "BaseBdev2", 00:11:26.578 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:26.578 "is_configured": true, 00:11:26.578 "data_offset": 0, 00:11:26.578 "data_size": 65536 00:11:26.578 }, 00:11:26.578 { 00:11:26.578 "name": "BaseBdev3", 00:11:26.578 "uuid": "641cb118-3c98-47d6-a36f-47acd41d78d7", 00:11:26.578 "is_configured": true, 00:11:26.578 "data_offset": 0, 00:11:26.578 "data_size": 65536 00:11:26.578 }, 00:11:26.578 { 00:11:26.578 "name": "BaseBdev4", 00:11:26.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.578 "is_configured": false, 
00:11:26.578 "data_offset": 0, 00:11:26.578 "data_size": 0 00:11:26.578 } 00:11:26.578 ] 00:11:26.578 }' 00:11:26.578 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.578 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 [2024-11-29 07:43:16.713139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.839 [2024-11-29 07:43:16.713256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.839 [2024-11-29 07:43:16.713269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:26.839 [2024-11-29 07:43:16.713554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:26.839 [2024-11-29 07:43:16.713719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.839 [2024-11-29 07:43:16.713730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.839 [2024-11-29 07:43:16.713985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.839 BaseBdev4 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 [ 00:11:26.839 { 00:11:26.839 "name": "BaseBdev4", 00:11:26.839 "aliases": [ 00:11:26.839 "547508e5-b304-4e85-b20a-4a76d35c50cc" 00:11:26.839 ], 00:11:26.839 "product_name": "Malloc disk", 00:11:26.839 "block_size": 512, 00:11:26.839 "num_blocks": 65536, 00:11:26.839 "uuid": "547508e5-b304-4e85-b20a-4a76d35c50cc", 00:11:26.839 "assigned_rate_limits": { 00:11:26.839 "rw_ios_per_sec": 0, 00:11:26.839 "rw_mbytes_per_sec": 0, 00:11:26.839 "r_mbytes_per_sec": 0, 00:11:26.839 "w_mbytes_per_sec": 0 00:11:26.839 }, 00:11:26.839 "claimed": true, 00:11:26.839 "claim_type": "exclusive_write", 00:11:26.839 "zoned": false, 00:11:26.839 "supported_io_types": { 00:11:26.839 "read": true, 00:11:26.839 "write": true, 00:11:26.839 "unmap": true, 00:11:26.839 "flush": true, 00:11:26.839 "reset": true, 00:11:26.839 
"nvme_admin": false, 00:11:26.839 "nvme_io": false, 00:11:26.839 "nvme_io_md": false, 00:11:26.839 "write_zeroes": true, 00:11:26.839 "zcopy": true, 00:11:26.839 "get_zone_info": false, 00:11:26.839 "zone_management": false, 00:11:26.839 "zone_append": false, 00:11:26.839 "compare": false, 00:11:26.839 "compare_and_write": false, 00:11:26.839 "abort": true, 00:11:26.839 "seek_hole": false, 00:11:26.839 "seek_data": false, 00:11:26.839 "copy": true, 00:11:26.839 "nvme_iov_md": false 00:11:26.839 }, 00:11:26.839 "memory_domains": [ 00:11:26.839 { 00:11:26.839 "dma_device_id": "system", 00:11:26.839 "dma_device_type": 1 00:11:26.839 }, 00:11:26.839 { 00:11:26.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.839 "dma_device_type": 2 00:11:26.839 } 00:11:26.839 ], 00:11:26.839 "driver_specific": {} 00:11:26.839 } 00:11:26.839 ] 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.839 
07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.099 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.099 "name": "Existed_Raid", 00:11:27.099 "uuid": "1582bf56-0fa6-42ef-80a5-870fbb50ec95", 00:11:27.099 "strip_size_kb": 64, 00:11:27.099 "state": "online", 00:11:27.099 "raid_level": "concat", 00:11:27.099 "superblock": false, 00:11:27.099 "num_base_bdevs": 4, 00:11:27.099 "num_base_bdevs_discovered": 4, 00:11:27.099 "num_base_bdevs_operational": 4, 00:11:27.099 "base_bdevs_list": [ 00:11:27.099 { 00:11:27.099 "name": "BaseBdev1", 00:11:27.099 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:27.099 "is_configured": true, 00:11:27.099 "data_offset": 0, 00:11:27.099 "data_size": 65536 00:11:27.099 }, 00:11:27.099 { 00:11:27.099 "name": "BaseBdev2", 00:11:27.099 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:27.099 "is_configured": true, 00:11:27.099 "data_offset": 0, 00:11:27.099 "data_size": 65536 00:11:27.099 }, 00:11:27.099 { 00:11:27.099 "name": "BaseBdev3", 
00:11:27.099 "uuid": "641cb118-3c98-47d6-a36f-47acd41d78d7", 00:11:27.099 "is_configured": true, 00:11:27.099 "data_offset": 0, 00:11:27.099 "data_size": 65536 00:11:27.099 }, 00:11:27.100 { 00:11:27.100 "name": "BaseBdev4", 00:11:27.100 "uuid": "547508e5-b304-4e85-b20a-4a76d35c50cc", 00:11:27.100 "is_configured": true, 00:11:27.100 "data_offset": 0, 00:11:27.100 "data_size": 65536 00:11:27.100 } 00:11:27.100 ] 00:11:27.100 }' 00:11:27.100 07:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.100 07:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.360 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.360 [2024-11-29 07:43:17.196729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.361 
07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.361 "name": "Existed_Raid", 00:11:27.361 "aliases": [ 00:11:27.361 "1582bf56-0fa6-42ef-80a5-870fbb50ec95" 00:11:27.361 ], 00:11:27.361 "product_name": "Raid Volume", 00:11:27.361 "block_size": 512, 00:11:27.361 "num_blocks": 262144, 00:11:27.361 "uuid": "1582bf56-0fa6-42ef-80a5-870fbb50ec95", 00:11:27.361 "assigned_rate_limits": { 00:11:27.361 "rw_ios_per_sec": 0, 00:11:27.361 "rw_mbytes_per_sec": 0, 00:11:27.361 "r_mbytes_per_sec": 0, 00:11:27.361 "w_mbytes_per_sec": 0 00:11:27.361 }, 00:11:27.361 "claimed": false, 00:11:27.361 "zoned": false, 00:11:27.361 "supported_io_types": { 00:11:27.361 "read": true, 00:11:27.361 "write": true, 00:11:27.361 "unmap": true, 00:11:27.361 "flush": true, 00:11:27.361 "reset": true, 00:11:27.361 "nvme_admin": false, 00:11:27.361 "nvme_io": false, 00:11:27.361 "nvme_io_md": false, 00:11:27.361 "write_zeroes": true, 00:11:27.361 "zcopy": false, 00:11:27.361 "get_zone_info": false, 00:11:27.361 "zone_management": false, 00:11:27.361 "zone_append": false, 00:11:27.361 "compare": false, 00:11:27.361 "compare_and_write": false, 00:11:27.361 "abort": false, 00:11:27.361 "seek_hole": false, 00:11:27.361 "seek_data": false, 00:11:27.361 "copy": false, 00:11:27.361 "nvme_iov_md": false 00:11:27.361 }, 00:11:27.361 "memory_domains": [ 00:11:27.361 { 00:11:27.361 "dma_device_id": "system", 00:11:27.361 "dma_device_type": 1 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.361 "dma_device_type": 2 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "system", 00:11:27.361 "dma_device_type": 1 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.361 "dma_device_type": 2 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "system", 00:11:27.361 "dma_device_type": 1 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:27.361 "dma_device_type": 2 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "system", 00:11:27.361 "dma_device_type": 1 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.361 "dma_device_type": 2 00:11:27.361 } 00:11:27.361 ], 00:11:27.361 "driver_specific": { 00:11:27.361 "raid": { 00:11:27.361 "uuid": "1582bf56-0fa6-42ef-80a5-870fbb50ec95", 00:11:27.361 "strip_size_kb": 64, 00:11:27.361 "state": "online", 00:11:27.361 "raid_level": "concat", 00:11:27.361 "superblock": false, 00:11:27.361 "num_base_bdevs": 4, 00:11:27.361 "num_base_bdevs_discovered": 4, 00:11:27.361 "num_base_bdevs_operational": 4, 00:11:27.361 "base_bdevs_list": [ 00:11:27.361 { 00:11:27.361 "name": "BaseBdev1", 00:11:27.361 "uuid": "1201830e-67bd-4072-a72d-c7fbf7b76199", 00:11:27.361 "is_configured": true, 00:11:27.361 "data_offset": 0, 00:11:27.361 "data_size": 65536 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "name": "BaseBdev2", 00:11:27.361 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:27.361 "is_configured": true, 00:11:27.361 "data_offset": 0, 00:11:27.361 "data_size": 65536 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "name": "BaseBdev3", 00:11:27.361 "uuid": "641cb118-3c98-47d6-a36f-47acd41d78d7", 00:11:27.361 "is_configured": true, 00:11:27.361 "data_offset": 0, 00:11:27.361 "data_size": 65536 00:11:27.361 }, 00:11:27.361 { 00:11:27.361 "name": "BaseBdev4", 00:11:27.361 "uuid": "547508e5-b304-4e85-b20a-4a76d35c50cc", 00:11:27.361 "is_configured": true, 00:11:27.361 "data_offset": 0, 00:11:27.361 "data_size": 65536 00:11:27.361 } 00:11:27.361 ] 00:11:27.361 } 00:11:27.361 } 00:11:27.361 }' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:27.361 BaseBdev2 
00:11:27.361 BaseBdev3 00:11:27.361 BaseBdev4' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.361 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.621 07:43:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.621 07:43:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.621 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.621 [2024-11-29 07:43:17.487909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.621 [2024-11-29 07:43:17.487940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.621 [2024-11-29 07:43:17.487992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.881 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.881 "name": "Existed_Raid", 00:11:27.881 "uuid": "1582bf56-0fa6-42ef-80a5-870fbb50ec95", 00:11:27.881 "strip_size_kb": 64, 00:11:27.881 "state": "offline", 00:11:27.881 "raid_level": "concat", 00:11:27.881 "superblock": false, 00:11:27.882 "num_base_bdevs": 4, 00:11:27.882 "num_base_bdevs_discovered": 3, 00:11:27.882 "num_base_bdevs_operational": 3, 00:11:27.882 "base_bdevs_list": [ 00:11:27.882 { 00:11:27.882 "name": null, 00:11:27.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.882 "is_configured": false, 00:11:27.882 "data_offset": 0, 00:11:27.882 "data_size": 65536 00:11:27.882 }, 00:11:27.882 { 00:11:27.882 "name": "BaseBdev2", 00:11:27.882 "uuid": "be0babd5-641f-4a93-950f-3ad930810182", 00:11:27.882 "is_configured": 
true, 00:11:27.882 "data_offset": 0, 00:11:27.882 "data_size": 65536 00:11:27.882 }, 00:11:27.882 { 00:11:27.882 "name": "BaseBdev3", 00:11:27.882 "uuid": "641cb118-3c98-47d6-a36f-47acd41d78d7", 00:11:27.882 "is_configured": true, 00:11:27.882 "data_offset": 0, 00:11:27.882 "data_size": 65536 00:11:27.882 }, 00:11:27.882 { 00:11:27.882 "name": "BaseBdev4", 00:11:27.882 "uuid": "547508e5-b304-4e85-b20a-4a76d35c50cc", 00:11:27.882 "is_configured": true, 00:11:27.882 "data_offset": 0, 00:11:27.882 "data_size": 65536 00:11:27.882 } 00:11:27.882 ] 00:11:27.882 }' 00:11:27.882 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.882 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.142 07:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.142 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.142 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.142 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:28.142 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:28.142 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.142 [2024-11-29 07:43:18.014673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 [2024-11-29 07:43:18.166768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.402 07:43:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.402 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 [2024-11-29 07:43:18.319768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:28.402 [2024-11-29 07:43:18.319816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 BaseBdev2 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 [ 00:11:28.662 { 00:11:28.662 "name": "BaseBdev2", 00:11:28.662 "aliases": [ 00:11:28.662 "0f5416b7-f8f4-4e05-8c47-34a480e113e7" 00:11:28.662 ], 00:11:28.662 "product_name": "Malloc disk", 00:11:28.662 "block_size": 512, 00:11:28.662 "num_blocks": 65536, 00:11:28.662 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:28.662 "assigned_rate_limits": { 00:11:28.662 "rw_ios_per_sec": 0, 00:11:28.662 "rw_mbytes_per_sec": 0, 00:11:28.662 "r_mbytes_per_sec": 0, 00:11:28.662 "w_mbytes_per_sec": 0 00:11:28.662 }, 00:11:28.662 "claimed": false, 00:11:28.662 "zoned": false, 00:11:28.662 "supported_io_types": { 00:11:28.662 "read": true, 00:11:28.662 "write": true, 00:11:28.662 "unmap": true, 00:11:28.662 "flush": true, 00:11:28.662 "reset": true, 00:11:28.662 "nvme_admin": false, 00:11:28.662 "nvme_io": false, 00:11:28.662 "nvme_io_md": false, 00:11:28.662 "write_zeroes": true, 00:11:28.662 "zcopy": true, 00:11:28.662 "get_zone_info": false, 00:11:28.662 "zone_management": false, 00:11:28.662 "zone_append": false, 00:11:28.662 "compare": false, 00:11:28.662 "compare_and_write": false, 00:11:28.662 "abort": true, 00:11:28.662 "seek_hole": false, 00:11:28.662 
"seek_data": false, 00:11:28.662 "copy": true, 00:11:28.662 "nvme_iov_md": false 00:11:28.662 }, 00:11:28.662 "memory_domains": [ 00:11:28.662 { 00:11:28.662 "dma_device_id": "system", 00:11:28.662 "dma_device_type": 1 00:11:28.662 }, 00:11:28.662 { 00:11:28.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.662 "dma_device_type": 2 00:11:28.662 } 00:11:28.662 ], 00:11:28.662 "driver_specific": {} 00:11:28.662 } 00:11:28.662 ] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 BaseBdev3 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.662 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.923 [ 00:11:28.923 { 00:11:28.923 "name": "BaseBdev3", 00:11:28.923 "aliases": [ 00:11:28.923 "a939dc50-b505-45ca-a92d-b9bf40335513" 00:11:28.923 ], 00:11:28.923 "product_name": "Malloc disk", 00:11:28.923 "block_size": 512, 00:11:28.923 "num_blocks": 65536, 00:11:28.923 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:28.923 "assigned_rate_limits": { 00:11:28.923 "rw_ios_per_sec": 0, 00:11:28.923 "rw_mbytes_per_sec": 0, 00:11:28.923 "r_mbytes_per_sec": 0, 00:11:28.923 "w_mbytes_per_sec": 0 00:11:28.923 }, 00:11:28.923 "claimed": false, 00:11:28.923 "zoned": false, 00:11:28.923 "supported_io_types": { 00:11:28.923 "read": true, 00:11:28.923 "write": true, 00:11:28.923 "unmap": true, 00:11:28.923 "flush": true, 00:11:28.923 "reset": true, 00:11:28.923 "nvme_admin": false, 00:11:28.923 "nvme_io": false, 00:11:28.923 "nvme_io_md": false, 00:11:28.923 "write_zeroes": true, 00:11:28.923 "zcopy": true, 00:11:28.923 "get_zone_info": false, 00:11:28.923 "zone_management": false, 00:11:28.923 "zone_append": false, 00:11:28.923 "compare": false, 00:11:28.923 "compare_and_write": false, 00:11:28.923 "abort": true, 00:11:28.923 "seek_hole": false, 00:11:28.923 "seek_data": false, 
00:11:28.923 "copy": true, 00:11:28.923 "nvme_iov_md": false 00:11:28.923 }, 00:11:28.923 "memory_domains": [ 00:11:28.923 { 00:11:28.923 "dma_device_id": "system", 00:11:28.923 "dma_device_type": 1 00:11:28.923 }, 00:11:28.923 { 00:11:28.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.923 "dma_device_type": 2 00:11:28.923 } 00:11:28.923 ], 00:11:28.923 "driver_specific": {} 00:11:28.923 } 00:11:28.923 ] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.923 BaseBdev4 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.923 
07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.923 [ 00:11:28.923 { 00:11:28.923 "name": "BaseBdev4", 00:11:28.923 "aliases": [ 00:11:28.923 "85428dae-3fc3-40af-8191-cb5f14a40f04" 00:11:28.923 ], 00:11:28.923 "product_name": "Malloc disk", 00:11:28.923 "block_size": 512, 00:11:28.923 "num_blocks": 65536, 00:11:28.923 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:28.923 "assigned_rate_limits": { 00:11:28.923 "rw_ios_per_sec": 0, 00:11:28.923 "rw_mbytes_per_sec": 0, 00:11:28.923 "r_mbytes_per_sec": 0, 00:11:28.923 "w_mbytes_per_sec": 0 00:11:28.923 }, 00:11:28.923 "claimed": false, 00:11:28.923 "zoned": false, 00:11:28.923 "supported_io_types": { 00:11:28.923 "read": true, 00:11:28.923 "write": true, 00:11:28.923 "unmap": true, 00:11:28.923 "flush": true, 00:11:28.923 "reset": true, 00:11:28.923 "nvme_admin": false, 00:11:28.923 "nvme_io": false, 00:11:28.923 "nvme_io_md": false, 00:11:28.923 "write_zeroes": true, 00:11:28.923 "zcopy": true, 00:11:28.923 "get_zone_info": false, 00:11:28.923 "zone_management": false, 00:11:28.923 "zone_append": false, 00:11:28.923 "compare": false, 00:11:28.923 "compare_and_write": false, 00:11:28.923 "abort": true, 00:11:28.923 "seek_hole": false, 00:11:28.923 "seek_data": false, 00:11:28.923 
"copy": true, 00:11:28.923 "nvme_iov_md": false 00:11:28.923 }, 00:11:28.923 "memory_domains": [ 00:11:28.923 { 00:11:28.923 "dma_device_id": "system", 00:11:28.923 "dma_device_type": 1 00:11:28.923 }, 00:11:28.923 { 00:11:28.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.923 "dma_device_type": 2 00:11:28.923 } 00:11:28.923 ], 00:11:28.923 "driver_specific": {} 00:11:28.923 } 00:11:28.923 ] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.923 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.924 [2024-11-29 07:43:18.710292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.924 [2024-11-29 07:43:18.710377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.924 [2024-11-29 07:43:18.710418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.924 [2024-11-29 07:43:18.712229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.924 [2024-11-29 07:43:18.712333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.924 07:43:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.924 "name": "Existed_Raid", 00:11:28.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.924 "strip_size_kb": 64, 00:11:28.924 "state": "configuring", 00:11:28.924 
"raid_level": "concat", 00:11:28.924 "superblock": false, 00:11:28.924 "num_base_bdevs": 4, 00:11:28.924 "num_base_bdevs_discovered": 3, 00:11:28.924 "num_base_bdevs_operational": 4, 00:11:28.924 "base_bdevs_list": [ 00:11:28.924 { 00:11:28.924 "name": "BaseBdev1", 00:11:28.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.924 "is_configured": false, 00:11:28.924 "data_offset": 0, 00:11:28.924 "data_size": 0 00:11:28.924 }, 00:11:28.924 { 00:11:28.924 "name": "BaseBdev2", 00:11:28.924 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:28.924 "is_configured": true, 00:11:28.924 "data_offset": 0, 00:11:28.924 "data_size": 65536 00:11:28.924 }, 00:11:28.924 { 00:11:28.924 "name": "BaseBdev3", 00:11:28.924 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:28.924 "is_configured": true, 00:11:28.924 "data_offset": 0, 00:11:28.924 "data_size": 65536 00:11:28.924 }, 00:11:28.924 { 00:11:28.924 "name": "BaseBdev4", 00:11:28.924 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:28.924 "is_configured": true, 00:11:28.924 "data_offset": 0, 00:11:28.924 "data_size": 65536 00:11:28.924 } 00:11:28.924 ] 00:11:28.924 }' 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.924 07:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.494 [2024-11-29 07:43:19.157556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.494 "name": "Existed_Raid", 00:11:29.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.494 "strip_size_kb": 64, 00:11:29.494 "state": "configuring", 00:11:29.494 "raid_level": "concat", 00:11:29.494 "superblock": false, 
00:11:29.494 "num_base_bdevs": 4, 00:11:29.494 "num_base_bdevs_discovered": 2, 00:11:29.494 "num_base_bdevs_operational": 4, 00:11:29.494 "base_bdevs_list": [ 00:11:29.494 { 00:11:29.494 "name": "BaseBdev1", 00:11:29.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.494 "is_configured": false, 00:11:29.494 "data_offset": 0, 00:11:29.494 "data_size": 0 00:11:29.494 }, 00:11:29.494 { 00:11:29.494 "name": null, 00:11:29.494 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:29.494 "is_configured": false, 00:11:29.494 "data_offset": 0, 00:11:29.494 "data_size": 65536 00:11:29.494 }, 00:11:29.494 { 00:11:29.494 "name": "BaseBdev3", 00:11:29.494 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:29.494 "is_configured": true, 00:11:29.494 "data_offset": 0, 00:11:29.494 "data_size": 65536 00:11:29.494 }, 00:11:29.494 { 00:11:29.494 "name": "BaseBdev4", 00:11:29.494 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:29.494 "is_configured": true, 00:11:29.494 "data_offset": 0, 00:11:29.494 "data_size": 65536 00:11:29.494 } 00:11:29.494 ] 00:11:29.494 }' 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.494 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.754 07:43:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.754 [2024-11-29 07:43:19.656409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.754 BaseBdev1 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.754 07:43:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.754 [ 00:11:29.754 { 00:11:29.754 "name": "BaseBdev1", 00:11:29.754 "aliases": [ 00:11:29.754 "16724f34-2ebd-45cf-bfce-5b29ea60ce5d" 00:11:29.754 ], 00:11:29.754 "product_name": "Malloc disk", 00:11:29.754 "block_size": 512, 00:11:29.754 "num_blocks": 65536, 00:11:29.754 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:29.754 "assigned_rate_limits": { 00:11:29.754 "rw_ios_per_sec": 0, 00:11:29.754 "rw_mbytes_per_sec": 0, 00:11:29.754 "r_mbytes_per_sec": 0, 00:11:29.754 "w_mbytes_per_sec": 0 00:11:29.754 }, 00:11:29.754 "claimed": true, 00:11:29.754 "claim_type": "exclusive_write", 00:11:29.754 "zoned": false, 00:11:29.754 "supported_io_types": { 00:11:29.754 "read": true, 00:11:29.754 "write": true, 00:11:29.754 "unmap": true, 00:11:29.754 "flush": true, 00:11:29.754 "reset": true, 00:11:29.754 "nvme_admin": false, 00:11:29.754 "nvme_io": false, 00:11:29.754 "nvme_io_md": false, 00:11:29.754 "write_zeroes": true, 00:11:29.754 "zcopy": true, 00:11:29.754 "get_zone_info": false, 00:11:29.754 "zone_management": false, 00:11:29.754 "zone_append": false, 00:11:29.754 "compare": false, 00:11:29.754 "compare_and_write": false, 00:11:29.754 "abort": true, 00:11:29.754 "seek_hole": false, 00:11:29.754 "seek_data": false, 00:11:29.754 "copy": true, 00:11:29.754 "nvme_iov_md": false 00:11:29.754 }, 00:11:29.754 "memory_domains": [ 00:11:29.754 { 00:11:30.014 "dma_device_id": "system", 00:11:30.014 "dma_device_type": 1 00:11:30.014 }, 00:11:30.014 { 00:11:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.014 "dma_device_type": 2 00:11:30.014 } 00:11:30.014 ], 00:11:30.014 "driver_specific": {} 00:11:30.014 } 00:11:30.014 ] 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.014 "name": "Existed_Raid", 00:11:30.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.014 "strip_size_kb": 64, 00:11:30.014 "state": "configuring", 00:11:30.014 "raid_level": "concat", 00:11:30.014 "superblock": false, 
00:11:30.014 "num_base_bdevs": 4, 00:11:30.014 "num_base_bdevs_discovered": 3, 00:11:30.014 "num_base_bdevs_operational": 4, 00:11:30.014 "base_bdevs_list": [ 00:11:30.014 { 00:11:30.014 "name": "BaseBdev1", 00:11:30.014 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:30.014 "is_configured": true, 00:11:30.014 "data_offset": 0, 00:11:30.014 "data_size": 65536 00:11:30.014 }, 00:11:30.014 { 00:11:30.014 "name": null, 00:11:30.014 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:30.014 "is_configured": false, 00:11:30.014 "data_offset": 0, 00:11:30.014 "data_size": 65536 00:11:30.014 }, 00:11:30.014 { 00:11:30.014 "name": "BaseBdev3", 00:11:30.014 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:30.014 "is_configured": true, 00:11:30.014 "data_offset": 0, 00:11:30.014 "data_size": 65536 00:11:30.014 }, 00:11:30.014 { 00:11:30.014 "name": "BaseBdev4", 00:11:30.014 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:30.014 "is_configured": true, 00:11:30.014 "data_offset": 0, 00:11:30.014 "data_size": 65536 00:11:30.014 } 00:11:30.014 ] 00:11:30.014 }' 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.014 07:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:30.274 07:43:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.274 [2024-11-29 07:43:20.183594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.274 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.275 07:43:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.275 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.275 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.534 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.534 "name": "Existed_Raid", 00:11:30.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.534 "strip_size_kb": 64, 00:11:30.534 "state": "configuring", 00:11:30.534 "raid_level": "concat", 00:11:30.534 "superblock": false, 00:11:30.534 "num_base_bdevs": 4, 00:11:30.534 "num_base_bdevs_discovered": 2, 00:11:30.534 "num_base_bdevs_operational": 4, 00:11:30.534 "base_bdevs_list": [ 00:11:30.534 { 00:11:30.534 "name": "BaseBdev1", 00:11:30.534 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:30.534 "is_configured": true, 00:11:30.534 "data_offset": 0, 00:11:30.534 "data_size": 65536 00:11:30.534 }, 00:11:30.534 { 00:11:30.534 "name": null, 00:11:30.534 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:30.534 "is_configured": false, 00:11:30.534 "data_offset": 0, 00:11:30.534 "data_size": 65536 00:11:30.534 }, 00:11:30.534 { 00:11:30.534 "name": null, 00:11:30.534 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:30.534 "is_configured": false, 00:11:30.534 "data_offset": 0, 00:11:30.534 "data_size": 65536 00:11:30.534 }, 00:11:30.534 { 00:11:30.534 "name": "BaseBdev4", 00:11:30.534 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:30.534 "is_configured": true, 00:11:30.534 "data_offset": 0, 00:11:30.534 "data_size": 65536 00:11:30.534 } 00:11:30.534 ] 00:11:30.534 }' 00:11:30.534 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.534 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.794 [2024-11-29 07:43:20.686731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.794 "name": "Existed_Raid", 00:11:30.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.794 "strip_size_kb": 64, 00:11:30.794 "state": "configuring", 00:11:30.794 "raid_level": "concat", 00:11:30.794 "superblock": false, 00:11:30.794 "num_base_bdevs": 4, 00:11:30.794 "num_base_bdevs_discovered": 3, 00:11:30.794 "num_base_bdevs_operational": 4, 00:11:30.794 "base_bdevs_list": [ 00:11:30.794 { 00:11:30.794 "name": "BaseBdev1", 00:11:30.794 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:30.794 "is_configured": true, 00:11:30.794 "data_offset": 0, 00:11:30.794 "data_size": 65536 00:11:30.794 }, 00:11:30.794 { 00:11:30.794 "name": null, 00:11:30.794 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:30.794 "is_configured": false, 00:11:30.794 "data_offset": 0, 00:11:30.794 "data_size": 65536 00:11:30.794 }, 00:11:30.794 { 00:11:30.794 "name": "BaseBdev3", 00:11:30.794 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:30.794 "is_configured": 
true, 00:11:30.794 "data_offset": 0, 00:11:30.794 "data_size": 65536 00:11:30.794 }, 00:11:30.794 { 00:11:30.794 "name": "BaseBdev4", 00:11:30.794 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:30.794 "is_configured": true, 00:11:30.794 "data_offset": 0, 00:11:30.794 "data_size": 65536 00:11:30.794 } 00:11:30.794 ] 00:11:30.794 }' 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.794 07:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.363 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.363 [2024-11-29 07:43:21.137988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.364 "name": "Existed_Raid", 00:11:31.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.364 "strip_size_kb": 64, 00:11:31.364 "state": "configuring", 00:11:31.364 "raid_level": "concat", 00:11:31.364 "superblock": false, 00:11:31.364 "num_base_bdevs": 4, 00:11:31.364 "num_base_bdevs_discovered": 2, 00:11:31.364 "num_base_bdevs_operational": 4, 00:11:31.364 
"base_bdevs_list": [ 00:11:31.364 { 00:11:31.364 "name": null, 00:11:31.364 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:31.364 "is_configured": false, 00:11:31.364 "data_offset": 0, 00:11:31.364 "data_size": 65536 00:11:31.364 }, 00:11:31.364 { 00:11:31.364 "name": null, 00:11:31.364 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:31.364 "is_configured": false, 00:11:31.364 "data_offset": 0, 00:11:31.364 "data_size": 65536 00:11:31.364 }, 00:11:31.364 { 00:11:31.364 "name": "BaseBdev3", 00:11:31.364 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:31.364 "is_configured": true, 00:11:31.364 "data_offset": 0, 00:11:31.364 "data_size": 65536 00:11:31.364 }, 00:11:31.364 { 00:11:31.364 "name": "BaseBdev4", 00:11:31.364 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:31.364 "is_configured": true, 00:11:31.364 "data_offset": 0, 00:11:31.364 "data_size": 65536 00:11:31.364 } 00:11:31.364 ] 00:11:31.364 }' 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.364 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:31.936 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.937 07:43:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.937 [2024-11-29 07:43:21.707499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.937 07:43:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.937 "name": "Existed_Raid", 00:11:31.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.937 "strip_size_kb": 64, 00:11:31.937 "state": "configuring", 00:11:31.937 "raid_level": "concat", 00:11:31.937 "superblock": false, 00:11:31.937 "num_base_bdevs": 4, 00:11:31.937 "num_base_bdevs_discovered": 3, 00:11:31.937 "num_base_bdevs_operational": 4, 00:11:31.937 "base_bdevs_list": [ 00:11:31.937 { 00:11:31.937 "name": null, 00:11:31.937 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:31.937 "is_configured": false, 00:11:31.937 "data_offset": 0, 00:11:31.937 "data_size": 65536 00:11:31.937 }, 00:11:31.937 { 00:11:31.937 "name": "BaseBdev2", 00:11:31.937 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:31.937 "is_configured": true, 00:11:31.937 "data_offset": 0, 00:11:31.937 "data_size": 65536 00:11:31.937 }, 00:11:31.937 { 00:11:31.937 "name": "BaseBdev3", 00:11:31.937 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:31.937 "is_configured": true, 00:11:31.937 "data_offset": 0, 00:11:31.937 "data_size": 65536 00:11:31.937 }, 00:11:31.937 { 00:11:31.937 "name": "BaseBdev4", 00:11:31.937 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:31.937 "is_configured": true, 00:11:31.937 "data_offset": 0, 00:11:31.937 "data_size": 65536 00:11:31.937 } 00:11:31.937 ] 00:11:31.937 }' 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.937 07:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.503 07:43:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16724f34-2ebd-45cf-bfce-5b29ea60ce5d 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 [2024-11-29 07:43:22.265779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.503 [2024-11-29 07:43:22.265887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.503 [2024-11-29 07:43:22.265911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:32.503 [2024-11-29 07:43:22.266224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:32.503 
[2024-11-29 07:43:22.266412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.503 [2024-11-29 07:43:22.266453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.503 [2024-11-29 07:43:22.266708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.503 NewBaseBdev 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.503 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:32.503 [ 00:11:32.503 { 00:11:32.503 "name": "NewBaseBdev", 00:11:32.503 "aliases": [ 00:11:32.503 "16724f34-2ebd-45cf-bfce-5b29ea60ce5d" 00:11:32.503 ], 00:11:32.503 "product_name": "Malloc disk", 00:11:32.503 "block_size": 512, 00:11:32.503 "num_blocks": 65536, 00:11:32.503 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:32.503 "assigned_rate_limits": { 00:11:32.503 "rw_ios_per_sec": 0, 00:11:32.503 "rw_mbytes_per_sec": 0, 00:11:32.503 "r_mbytes_per_sec": 0, 00:11:32.503 "w_mbytes_per_sec": 0 00:11:32.503 }, 00:11:32.503 "claimed": true, 00:11:32.503 "claim_type": "exclusive_write", 00:11:32.503 "zoned": false, 00:11:32.503 "supported_io_types": { 00:11:32.503 "read": true, 00:11:32.503 "write": true, 00:11:32.503 "unmap": true, 00:11:32.503 "flush": true, 00:11:32.503 "reset": true, 00:11:32.503 "nvme_admin": false, 00:11:32.503 "nvme_io": false, 00:11:32.503 "nvme_io_md": false, 00:11:32.503 "write_zeroes": true, 00:11:32.503 "zcopy": true, 00:11:32.504 "get_zone_info": false, 00:11:32.504 "zone_management": false, 00:11:32.504 "zone_append": false, 00:11:32.504 "compare": false, 00:11:32.504 "compare_and_write": false, 00:11:32.504 "abort": true, 00:11:32.504 "seek_hole": false, 00:11:32.504 "seek_data": false, 00:11:32.504 "copy": true, 00:11:32.504 "nvme_iov_md": false 00:11:32.504 }, 00:11:32.504 "memory_domains": [ 00:11:32.504 { 00:11:32.504 "dma_device_id": "system", 00:11:32.504 "dma_device_type": 1 00:11:32.504 }, 00:11:32.504 { 00:11:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.504 "dma_device_type": 2 00:11:32.504 } 00:11:32.504 ], 00:11:32.504 "driver_specific": {} 00:11:32.504 } 00:11:32.504 ] 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.504 "name": "Existed_Raid", 00:11:32.504 "uuid": "abd03bfe-5a91-4f9b-807c-972e1763e9b6", 00:11:32.504 "strip_size_kb": 64, 00:11:32.504 "state": "online", 00:11:32.504 "raid_level": "concat", 00:11:32.504 "superblock": false, 00:11:32.504 "num_base_bdevs": 4, 00:11:32.504 
"num_base_bdevs_discovered": 4, 00:11:32.504 "num_base_bdevs_operational": 4, 00:11:32.504 "base_bdevs_list": [ 00:11:32.504 { 00:11:32.504 "name": "NewBaseBdev", 00:11:32.504 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:32.504 "is_configured": true, 00:11:32.504 "data_offset": 0, 00:11:32.504 "data_size": 65536 00:11:32.504 }, 00:11:32.504 { 00:11:32.504 "name": "BaseBdev2", 00:11:32.504 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:32.504 "is_configured": true, 00:11:32.504 "data_offset": 0, 00:11:32.504 "data_size": 65536 00:11:32.504 }, 00:11:32.504 { 00:11:32.504 "name": "BaseBdev3", 00:11:32.504 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:32.504 "is_configured": true, 00:11:32.504 "data_offset": 0, 00:11:32.504 "data_size": 65536 00:11:32.504 }, 00:11:32.504 { 00:11:32.504 "name": "BaseBdev4", 00:11:32.504 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:32.504 "is_configured": true, 00:11:32.504 "data_offset": 0, 00:11:32.504 "data_size": 65536 00:11:32.504 } 00:11:32.504 ] 00:11:32.504 }' 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.504 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.070 [2024-11-29 07:43:22.773300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.070 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.070 "name": "Existed_Raid", 00:11:33.070 "aliases": [ 00:11:33.070 "abd03bfe-5a91-4f9b-807c-972e1763e9b6" 00:11:33.070 ], 00:11:33.070 "product_name": "Raid Volume", 00:11:33.070 "block_size": 512, 00:11:33.070 "num_blocks": 262144, 00:11:33.070 "uuid": "abd03bfe-5a91-4f9b-807c-972e1763e9b6", 00:11:33.070 "assigned_rate_limits": { 00:11:33.070 "rw_ios_per_sec": 0, 00:11:33.070 "rw_mbytes_per_sec": 0, 00:11:33.070 "r_mbytes_per_sec": 0, 00:11:33.070 "w_mbytes_per_sec": 0 00:11:33.070 }, 00:11:33.070 "claimed": false, 00:11:33.070 "zoned": false, 00:11:33.070 "supported_io_types": { 00:11:33.070 "read": true, 00:11:33.070 "write": true, 00:11:33.070 "unmap": true, 00:11:33.070 "flush": true, 00:11:33.070 "reset": true, 00:11:33.070 "nvme_admin": false, 00:11:33.070 "nvme_io": false, 00:11:33.070 "nvme_io_md": false, 00:11:33.070 "write_zeroes": true, 00:11:33.070 "zcopy": false, 00:11:33.070 "get_zone_info": false, 00:11:33.070 "zone_management": false, 00:11:33.070 "zone_append": false, 00:11:33.070 "compare": false, 00:11:33.070 "compare_and_write": false, 00:11:33.070 "abort": false, 00:11:33.070 "seek_hole": false, 00:11:33.070 "seek_data": false, 00:11:33.070 "copy": false, 00:11:33.070 "nvme_iov_md": false 00:11:33.070 }, 00:11:33.070 "memory_domains": [ 
00:11:33.070 { 00:11:33.070 "dma_device_id": "system", 00:11:33.070 "dma_device_type": 1 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.070 "dma_device_type": 2 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "system", 00:11:33.070 "dma_device_type": 1 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.070 "dma_device_type": 2 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "system", 00:11:33.070 "dma_device_type": 1 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.070 "dma_device_type": 2 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "system", 00:11:33.070 "dma_device_type": 1 00:11:33.070 }, 00:11:33.070 { 00:11:33.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.070 "dma_device_type": 2 00:11:33.070 } 00:11:33.070 ], 00:11:33.070 "driver_specific": { 00:11:33.070 "raid": { 00:11:33.070 "uuid": "abd03bfe-5a91-4f9b-807c-972e1763e9b6", 00:11:33.070 "strip_size_kb": 64, 00:11:33.070 "state": "online", 00:11:33.071 "raid_level": "concat", 00:11:33.071 "superblock": false, 00:11:33.071 "num_base_bdevs": 4, 00:11:33.071 "num_base_bdevs_discovered": 4, 00:11:33.071 "num_base_bdevs_operational": 4, 00:11:33.071 "base_bdevs_list": [ 00:11:33.071 { 00:11:33.071 "name": "NewBaseBdev", 00:11:33.071 "uuid": "16724f34-2ebd-45cf-bfce-5b29ea60ce5d", 00:11:33.071 "is_configured": true, 00:11:33.071 "data_offset": 0, 00:11:33.071 "data_size": 65536 00:11:33.071 }, 00:11:33.071 { 00:11:33.071 "name": "BaseBdev2", 00:11:33.071 "uuid": "0f5416b7-f8f4-4e05-8c47-34a480e113e7", 00:11:33.071 "is_configured": true, 00:11:33.071 "data_offset": 0, 00:11:33.071 "data_size": 65536 00:11:33.071 }, 00:11:33.071 { 00:11:33.071 "name": "BaseBdev3", 00:11:33.071 "uuid": "a939dc50-b505-45ca-a92d-b9bf40335513", 00:11:33.071 "is_configured": true, 00:11:33.071 "data_offset": 0, 00:11:33.071 "data_size": 65536 
00:11:33.071 }, 00:11:33.071 { 00:11:33.071 "name": "BaseBdev4", 00:11:33.071 "uuid": "85428dae-3fc3-40af-8191-cb5f14a40f04", 00:11:33.071 "is_configured": true, 00:11:33.071 "data_offset": 0, 00:11:33.071 "data_size": 65536 00:11:33.071 } 00:11:33.071 ] 00:11:33.071 } 00:11:33.071 } 00:11:33.071 }' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:33.071 BaseBdev2 00:11:33.071 BaseBdev3 00:11:33.071 BaseBdev4' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.071 
07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.071 07:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.330 [2024-11-29 07:43:23.072451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.330 [2024-11-29 07:43:23.072480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.330 [2024-11-29 07:43:23.072554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.330 [2024-11-29 07:43:23.072624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.330 [2024-11-29 07:43:23.072634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71054 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71054 ']' 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71054 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.330 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71054 00:11:33.331 killing process with pid 71054 00:11:33.331 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.331 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.331 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71054' 00:11:33.331 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71054 00:11:33.331 [2024-11-29 07:43:23.121062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.331 07:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71054 00:11:33.591 [2024-11-29 07:43:23.513326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.970 00:11:34.970 real 0m11.348s 00:11:34.970 user 0m18.036s 00:11:34.970 sys 0m1.992s 00:11:34.970 ************************************ 00:11:34.970 END TEST raid_state_function_test 00:11:34.970 ************************************ 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.970 07:43:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:11:34.970 07:43:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.970 07:43:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.970 07:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.970 ************************************ 00:11:34.970 START TEST raid_state_function_test_sb 00:11:34.970 ************************************ 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71721 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71721' 00:11:34.970 Process raid pid: 71721 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71721 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71721 ']' 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.970 07:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.970 [2024-11-29 07:43:24.805218] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:34.970 [2024-11-29 07:43:24.805415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.230 [2024-11-29 07:43:24.968036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.230 [2024-11-29 07:43:25.080806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.488 [2024-11-29 07:43:25.287280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.488 [2024-11-29 07:43:25.287371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.748 [2024-11-29 07:43:25.636497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.748 [2024-11-29 07:43:25.636557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.748 [2024-11-29 07:43:25.636568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.748 [2024-11-29 07:43:25.636578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.748 [2024-11-29 07:43:25.636589] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:35.748 [2024-11-29 07:43:25.636599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.748 [2024-11-29 07:43:25.636605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.748 [2024-11-29 07:43:25.636613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.748 
07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.748 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.009 "name": "Existed_Raid", 00:11:36.009 "uuid": "b90ecd17-2305-4fd1-95d0-d2f85141dc96", 00:11:36.009 "strip_size_kb": 64, 00:11:36.009 "state": "configuring", 00:11:36.009 "raid_level": "concat", 00:11:36.009 "superblock": true, 00:11:36.009 "num_base_bdevs": 4, 00:11:36.009 "num_base_bdevs_discovered": 0, 00:11:36.009 "num_base_bdevs_operational": 4, 00:11:36.009 "base_bdevs_list": [ 00:11:36.009 { 00:11:36.009 "name": "BaseBdev1", 00:11:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.009 "is_configured": false, 00:11:36.009 "data_offset": 0, 00:11:36.009 "data_size": 0 00:11:36.009 }, 00:11:36.009 { 00:11:36.009 "name": "BaseBdev2", 00:11:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.009 "is_configured": false, 00:11:36.009 "data_offset": 0, 00:11:36.009 "data_size": 0 00:11:36.009 }, 00:11:36.009 { 00:11:36.009 "name": "BaseBdev3", 00:11:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.009 "is_configured": false, 00:11:36.009 "data_offset": 0, 00:11:36.009 "data_size": 0 00:11:36.009 }, 00:11:36.009 { 00:11:36.009 "name": "BaseBdev4", 00:11:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.009 "is_configured": false, 00:11:36.009 "data_offset": 0, 00:11:36.009 "data_size": 0 00:11:36.009 } 00:11:36.009 ] 00:11:36.009 }' 00:11:36.009 07:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.009 07:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 07:43:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 [2024-11-29 07:43:26.031765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.270 [2024-11-29 07:43:26.031877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 [2024-11-29 07:43:26.043791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.270 [2024-11-29 07:43:26.043872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.270 [2024-11-29 07:43:26.043900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.270 [2024-11-29 07:43:26.043924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.270 [2024-11-29 07:43:26.043943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.270 [2024-11-29 07:43:26.043964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.270 [2024-11-29 07:43:26.043982] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:36.270 [2024-11-29 07:43:26.044003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 [2024-11-29 07:43:26.091325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.270 BaseBdev1 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 [ 00:11:36.270 { 00:11:36.270 "name": "BaseBdev1", 00:11:36.270 "aliases": [ 00:11:36.270 "69ba5045-f7b5-49f5-9699-081356afc027" 00:11:36.270 ], 00:11:36.270 "product_name": "Malloc disk", 00:11:36.270 "block_size": 512, 00:11:36.270 "num_blocks": 65536, 00:11:36.270 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:36.270 "assigned_rate_limits": { 00:11:36.270 "rw_ios_per_sec": 0, 00:11:36.270 "rw_mbytes_per_sec": 0, 00:11:36.270 "r_mbytes_per_sec": 0, 00:11:36.270 "w_mbytes_per_sec": 0 00:11:36.270 }, 00:11:36.270 "claimed": true, 00:11:36.270 "claim_type": "exclusive_write", 00:11:36.270 "zoned": false, 00:11:36.270 "supported_io_types": { 00:11:36.270 "read": true, 00:11:36.270 "write": true, 00:11:36.270 "unmap": true, 00:11:36.270 "flush": true, 00:11:36.270 "reset": true, 00:11:36.270 "nvme_admin": false, 00:11:36.270 "nvme_io": false, 00:11:36.270 "nvme_io_md": false, 00:11:36.270 "write_zeroes": true, 00:11:36.270 "zcopy": true, 00:11:36.270 "get_zone_info": false, 00:11:36.270 "zone_management": false, 00:11:36.270 "zone_append": false, 00:11:36.270 "compare": false, 00:11:36.270 "compare_and_write": false, 00:11:36.270 "abort": true, 00:11:36.270 "seek_hole": false, 00:11:36.270 "seek_data": false, 00:11:36.270 "copy": true, 00:11:36.270 "nvme_iov_md": false 00:11:36.270 }, 00:11:36.270 "memory_domains": [ 00:11:36.270 { 00:11:36.270 "dma_device_id": "system", 00:11:36.270 "dma_device_type": 1 00:11:36.270 }, 00:11:36.270 { 00:11:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.270 "dma_device_type": 2 00:11:36.270 } 
00:11:36.270 ], 00:11:36.270 "driver_specific": {} 00:11:36.270 } 00:11:36.270 ] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.270 07:43:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.270 "name": "Existed_Raid", 00:11:36.270 "uuid": "afcbce10-8cc6-4f99-9a57-71e9170e16c5", 00:11:36.270 "strip_size_kb": 64, 00:11:36.270 "state": "configuring", 00:11:36.270 "raid_level": "concat", 00:11:36.270 "superblock": true, 00:11:36.270 "num_base_bdevs": 4, 00:11:36.270 "num_base_bdevs_discovered": 1, 00:11:36.270 "num_base_bdevs_operational": 4, 00:11:36.270 "base_bdevs_list": [ 00:11:36.270 { 00:11:36.270 "name": "BaseBdev1", 00:11:36.270 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:36.270 "is_configured": true, 00:11:36.270 "data_offset": 2048, 00:11:36.270 "data_size": 63488 00:11:36.270 }, 00:11:36.270 { 00:11:36.270 "name": "BaseBdev2", 00:11:36.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.270 "is_configured": false, 00:11:36.270 "data_offset": 0, 00:11:36.270 "data_size": 0 00:11:36.270 }, 00:11:36.270 { 00:11:36.270 "name": "BaseBdev3", 00:11:36.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.270 "is_configured": false, 00:11:36.270 "data_offset": 0, 00:11:36.270 "data_size": 0 00:11:36.270 }, 00:11:36.270 { 00:11:36.270 "name": "BaseBdev4", 00:11:36.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.270 "is_configured": false, 00:11:36.270 "data_offset": 0, 00:11:36.270 "data_size": 0 00:11:36.270 } 00:11:36.270 ] 00:11:36.270 }' 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.270 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.840 07:43:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.840 [2024-11-29 07:43:26.594517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.840 [2024-11-29 07:43:26.594638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.840 [2024-11-29 07:43:26.606547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.840 [2024-11-29 07:43:26.608386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.840 [2024-11-29 07:43:26.608430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.840 [2024-11-29 07:43:26.608441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.840 [2024-11-29 07:43:26.608452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.840 [2024-11-29 07:43:26.608459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.840 [2024-11-29 07:43:26.608467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:36.840 "name": "Existed_Raid", 00:11:36.840 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:36.840 "strip_size_kb": 64, 00:11:36.840 "state": "configuring", 00:11:36.840 "raid_level": "concat", 00:11:36.840 "superblock": true, 00:11:36.840 "num_base_bdevs": 4, 00:11:36.840 "num_base_bdevs_discovered": 1, 00:11:36.840 "num_base_bdevs_operational": 4, 00:11:36.840 "base_bdevs_list": [ 00:11:36.840 { 00:11:36.840 "name": "BaseBdev1", 00:11:36.840 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:36.840 "is_configured": true, 00:11:36.840 "data_offset": 2048, 00:11:36.840 "data_size": 63488 00:11:36.840 }, 00:11:36.840 { 00:11:36.840 "name": "BaseBdev2", 00:11:36.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.840 "is_configured": false, 00:11:36.840 "data_offset": 0, 00:11:36.840 "data_size": 0 00:11:36.840 }, 00:11:36.840 { 00:11:36.840 "name": "BaseBdev3", 00:11:36.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.840 "is_configured": false, 00:11:36.840 "data_offset": 0, 00:11:36.840 "data_size": 0 00:11:36.840 }, 00:11:36.840 { 00:11:36.840 "name": "BaseBdev4", 00:11:36.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.840 "is_configured": false, 00:11:36.840 "data_offset": 0, 00:11:36.840 "data_size": 0 00:11:36.840 } 00:11:36.840 ] 00:11:36.840 }' 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.840 07:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.408 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.408 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.408 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.408 [2024-11-29 07:43:27.116456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:37.408 BaseBdev2 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 [ 00:11:37.409 { 00:11:37.409 "name": "BaseBdev2", 00:11:37.409 "aliases": [ 00:11:37.409 "58ac099b-ff54-4899-b8b9-6be24825e0bc" 00:11:37.409 ], 00:11:37.409 "product_name": "Malloc disk", 00:11:37.409 "block_size": 512, 00:11:37.409 "num_blocks": 65536, 00:11:37.409 "uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 
00:11:37.409 "assigned_rate_limits": { 00:11:37.409 "rw_ios_per_sec": 0, 00:11:37.409 "rw_mbytes_per_sec": 0, 00:11:37.409 "r_mbytes_per_sec": 0, 00:11:37.409 "w_mbytes_per_sec": 0 00:11:37.409 }, 00:11:37.409 "claimed": true, 00:11:37.409 "claim_type": "exclusive_write", 00:11:37.409 "zoned": false, 00:11:37.409 "supported_io_types": { 00:11:37.409 "read": true, 00:11:37.409 "write": true, 00:11:37.409 "unmap": true, 00:11:37.409 "flush": true, 00:11:37.409 "reset": true, 00:11:37.409 "nvme_admin": false, 00:11:37.409 "nvme_io": false, 00:11:37.409 "nvme_io_md": false, 00:11:37.409 "write_zeroes": true, 00:11:37.409 "zcopy": true, 00:11:37.409 "get_zone_info": false, 00:11:37.409 "zone_management": false, 00:11:37.409 "zone_append": false, 00:11:37.409 "compare": false, 00:11:37.409 "compare_and_write": false, 00:11:37.409 "abort": true, 00:11:37.409 "seek_hole": false, 00:11:37.409 "seek_data": false, 00:11:37.409 "copy": true, 00:11:37.409 "nvme_iov_md": false 00:11:37.409 }, 00:11:37.409 "memory_domains": [ 00:11:37.409 { 00:11:37.409 "dma_device_id": "system", 00:11:37.409 "dma_device_type": 1 00:11:37.409 }, 00:11:37.409 { 00:11:37.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.409 "dma_device_type": 2 00:11:37.409 } 00:11:37.409 ], 00:11:37.409 "driver_specific": {} 00:11:37.409 } 00:11:37.409 ] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.409 "name": "Existed_Raid", 00:11:37.409 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:37.409 "strip_size_kb": 64, 00:11:37.409 "state": "configuring", 00:11:37.409 "raid_level": "concat", 00:11:37.409 "superblock": true, 00:11:37.409 "num_base_bdevs": 4, 00:11:37.409 "num_base_bdevs_discovered": 2, 00:11:37.409 
"num_base_bdevs_operational": 4, 00:11:37.409 "base_bdevs_list": [ 00:11:37.409 { 00:11:37.409 "name": "BaseBdev1", 00:11:37.409 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:37.409 "is_configured": true, 00:11:37.409 "data_offset": 2048, 00:11:37.409 "data_size": 63488 00:11:37.409 }, 00:11:37.409 { 00:11:37.409 "name": "BaseBdev2", 00:11:37.409 "uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 00:11:37.409 "is_configured": true, 00:11:37.409 "data_offset": 2048, 00:11:37.409 "data_size": 63488 00:11:37.409 }, 00:11:37.409 { 00:11:37.409 "name": "BaseBdev3", 00:11:37.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.409 "is_configured": false, 00:11:37.409 "data_offset": 0, 00:11:37.409 "data_size": 0 00:11:37.409 }, 00:11:37.409 { 00:11:37.409 "name": "BaseBdev4", 00:11:37.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.409 "is_configured": false, 00:11:37.409 "data_offset": 0, 00:11:37.409 "data_size": 0 00:11:37.409 } 00:11:37.409 ] 00:11:37.409 }' 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.409 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 [2024-11-29 07:43:27.677044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.980 BaseBdev3 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 [ 00:11:37.980 { 00:11:37.980 "name": "BaseBdev3", 00:11:37.980 "aliases": [ 00:11:37.980 "1bd4f1f6-b128-4675-840c-4451f852a813" 00:11:37.980 ], 00:11:37.980 "product_name": "Malloc disk", 00:11:37.980 "block_size": 512, 00:11:37.980 "num_blocks": 65536, 00:11:37.980 "uuid": "1bd4f1f6-b128-4675-840c-4451f852a813", 00:11:37.980 "assigned_rate_limits": { 00:11:37.980 "rw_ios_per_sec": 0, 00:11:37.980 "rw_mbytes_per_sec": 0, 00:11:37.980 "r_mbytes_per_sec": 0, 00:11:37.980 "w_mbytes_per_sec": 0 00:11:37.980 }, 00:11:37.980 "claimed": true, 00:11:37.980 "claim_type": "exclusive_write", 00:11:37.980 "zoned": false, 00:11:37.980 "supported_io_types": { 
00:11:37.980 "read": true, 00:11:37.980 "write": true, 00:11:37.980 "unmap": true, 00:11:37.980 "flush": true, 00:11:37.980 "reset": true, 00:11:37.980 "nvme_admin": false, 00:11:37.980 "nvme_io": false, 00:11:37.980 "nvme_io_md": false, 00:11:37.980 "write_zeroes": true, 00:11:37.980 "zcopy": true, 00:11:37.980 "get_zone_info": false, 00:11:37.980 "zone_management": false, 00:11:37.980 "zone_append": false, 00:11:37.980 "compare": false, 00:11:37.980 "compare_and_write": false, 00:11:37.980 "abort": true, 00:11:37.980 "seek_hole": false, 00:11:37.980 "seek_data": false, 00:11:37.980 "copy": true, 00:11:37.980 "nvme_iov_md": false 00:11:37.980 }, 00:11:37.980 "memory_domains": [ 00:11:37.980 { 00:11:37.980 "dma_device_id": "system", 00:11:37.980 "dma_device_type": 1 00:11:37.980 }, 00:11:37.980 { 00:11:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.980 "dma_device_type": 2 00:11:37.980 } 00:11:37.980 ], 00:11:37.980 "driver_specific": {} 00:11:37.980 } 00:11:37.980 ] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.980 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.980 "name": "Existed_Raid", 00:11:37.980 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:37.980 "strip_size_kb": 64, 00:11:37.980 "state": "configuring", 00:11:37.980 "raid_level": "concat", 00:11:37.980 "superblock": true, 00:11:37.980 "num_base_bdevs": 4, 00:11:37.980 "num_base_bdevs_discovered": 3, 00:11:37.980 "num_base_bdevs_operational": 4, 00:11:37.981 "base_bdevs_list": [ 00:11:37.981 { 00:11:37.981 "name": "BaseBdev1", 00:11:37.981 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:37.981 "is_configured": true, 00:11:37.981 "data_offset": 2048, 00:11:37.981 "data_size": 63488 00:11:37.981 }, 00:11:37.981 { 00:11:37.981 "name": "BaseBdev2", 00:11:37.981 
"uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 00:11:37.981 "is_configured": true, 00:11:37.981 "data_offset": 2048, 00:11:37.981 "data_size": 63488 00:11:37.981 }, 00:11:37.981 { 00:11:37.981 "name": "BaseBdev3", 00:11:37.981 "uuid": "1bd4f1f6-b128-4675-840c-4451f852a813", 00:11:37.981 "is_configured": true, 00:11:37.981 "data_offset": 2048, 00:11:37.981 "data_size": 63488 00:11:37.981 }, 00:11:37.981 { 00:11:37.981 "name": "BaseBdev4", 00:11:37.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.981 "is_configured": false, 00:11:37.981 "data_offset": 0, 00:11:37.981 "data_size": 0 00:11:37.981 } 00:11:37.981 ] 00:11:37.981 }' 00:11:37.981 07:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.981 07:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.241 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.241 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.241 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 [2024-11-29 07:43:28.198061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.503 [2024-11-29 07:43:28.198366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.503 [2024-11-29 07:43:28.198384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.503 [2024-11-29 07:43:28.198666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.503 [2024-11-29 07:43:28.198836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.503 [2024-11-29 07:43:28.198847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:38.503 [2024-11-29 07:43:28.198979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.503 BaseBdev4 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 [ 00:11:38.503 { 00:11:38.503 "name": "BaseBdev4", 00:11:38.503 "aliases": [ 00:11:38.503 "28a7e2cd-5207-464f-97b7-b3147a4a5d88" 00:11:38.503 ], 00:11:38.503 "product_name": "Malloc disk", 00:11:38.503 "block_size": 512, 
00:11:38.503 "num_blocks": 65536, 00:11:38.503 "uuid": "28a7e2cd-5207-464f-97b7-b3147a4a5d88", 00:11:38.503 "assigned_rate_limits": { 00:11:38.503 "rw_ios_per_sec": 0, 00:11:38.503 "rw_mbytes_per_sec": 0, 00:11:38.503 "r_mbytes_per_sec": 0, 00:11:38.503 "w_mbytes_per_sec": 0 00:11:38.503 }, 00:11:38.503 "claimed": true, 00:11:38.503 "claim_type": "exclusive_write", 00:11:38.503 "zoned": false, 00:11:38.503 "supported_io_types": { 00:11:38.503 "read": true, 00:11:38.503 "write": true, 00:11:38.503 "unmap": true, 00:11:38.503 "flush": true, 00:11:38.503 "reset": true, 00:11:38.503 "nvme_admin": false, 00:11:38.503 "nvme_io": false, 00:11:38.503 "nvme_io_md": false, 00:11:38.503 "write_zeroes": true, 00:11:38.503 "zcopy": true, 00:11:38.503 "get_zone_info": false, 00:11:38.503 "zone_management": false, 00:11:38.503 "zone_append": false, 00:11:38.503 "compare": false, 00:11:38.503 "compare_and_write": false, 00:11:38.503 "abort": true, 00:11:38.503 "seek_hole": false, 00:11:38.503 "seek_data": false, 00:11:38.503 "copy": true, 00:11:38.503 "nvme_iov_md": false 00:11:38.503 }, 00:11:38.503 "memory_domains": [ 00:11:38.503 { 00:11:38.503 "dma_device_id": "system", 00:11:38.503 "dma_device_type": 1 00:11:38.503 }, 00:11:38.503 { 00:11:38.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.503 "dma_device_type": 2 00:11:38.503 } 00:11:38.503 ], 00:11:38.503 "driver_specific": {} 00:11:38.503 } 00:11:38.503 ] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.503 "name": "Existed_Raid", 00:11:38.503 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:38.503 "strip_size_kb": 64, 00:11:38.503 "state": "online", 00:11:38.503 "raid_level": "concat", 00:11:38.503 "superblock": true, 00:11:38.503 "num_base_bdevs": 
4, 00:11:38.503 "num_base_bdevs_discovered": 4, 00:11:38.503 "num_base_bdevs_operational": 4, 00:11:38.503 "base_bdevs_list": [ 00:11:38.503 { 00:11:38.503 "name": "BaseBdev1", 00:11:38.503 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:38.503 "is_configured": true, 00:11:38.503 "data_offset": 2048, 00:11:38.503 "data_size": 63488 00:11:38.503 }, 00:11:38.503 { 00:11:38.503 "name": "BaseBdev2", 00:11:38.503 "uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 00:11:38.503 "is_configured": true, 00:11:38.503 "data_offset": 2048, 00:11:38.503 "data_size": 63488 00:11:38.503 }, 00:11:38.503 { 00:11:38.503 "name": "BaseBdev3", 00:11:38.503 "uuid": "1bd4f1f6-b128-4675-840c-4451f852a813", 00:11:38.503 "is_configured": true, 00:11:38.503 "data_offset": 2048, 00:11:38.503 "data_size": 63488 00:11:38.503 }, 00:11:38.503 { 00:11:38.503 "name": "BaseBdev4", 00:11:38.503 "uuid": "28a7e2cd-5207-464f-97b7-b3147a4a5d88", 00:11:38.503 "is_configured": true, 00:11:38.503 "data_offset": 2048, 00:11:38.503 "data_size": 63488 00:11:38.503 } 00:11:38.503 ] 00:11:38.503 }' 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.503 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.763 
07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.763 [2024-11-29 07:43:28.677638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.763 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.023 "name": "Existed_Raid", 00:11:39.023 "aliases": [ 00:11:39.023 "94ed7ec6-a238-4c00-b105-8acaf3480824" 00:11:39.023 ], 00:11:39.023 "product_name": "Raid Volume", 00:11:39.023 "block_size": 512, 00:11:39.023 "num_blocks": 253952, 00:11:39.023 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:39.023 "assigned_rate_limits": { 00:11:39.023 "rw_ios_per_sec": 0, 00:11:39.023 "rw_mbytes_per_sec": 0, 00:11:39.023 "r_mbytes_per_sec": 0, 00:11:39.023 "w_mbytes_per_sec": 0 00:11:39.023 }, 00:11:39.023 "claimed": false, 00:11:39.023 "zoned": false, 00:11:39.023 "supported_io_types": { 00:11:39.023 "read": true, 00:11:39.023 "write": true, 00:11:39.023 "unmap": true, 00:11:39.023 "flush": true, 00:11:39.023 "reset": true, 00:11:39.023 "nvme_admin": false, 00:11:39.023 "nvme_io": false, 00:11:39.023 "nvme_io_md": false, 00:11:39.023 "write_zeroes": true, 00:11:39.023 "zcopy": false, 00:11:39.023 "get_zone_info": false, 00:11:39.023 "zone_management": false, 00:11:39.023 "zone_append": false, 00:11:39.023 "compare": false, 00:11:39.023 "compare_and_write": false, 00:11:39.023 "abort": false, 00:11:39.023 "seek_hole": false, 00:11:39.023 "seek_data": false, 00:11:39.023 "copy": false, 00:11:39.023 
"nvme_iov_md": false 00:11:39.023 }, 00:11:39.023 "memory_domains": [ 00:11:39.023 { 00:11:39.023 "dma_device_id": "system", 00:11:39.023 "dma_device_type": 1 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.023 "dma_device_type": 2 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "system", 00:11:39.023 "dma_device_type": 1 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.023 "dma_device_type": 2 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "system", 00:11:39.023 "dma_device_type": 1 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.023 "dma_device_type": 2 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "system", 00:11:39.023 "dma_device_type": 1 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.023 "dma_device_type": 2 00:11:39.023 } 00:11:39.023 ], 00:11:39.023 "driver_specific": { 00:11:39.023 "raid": { 00:11:39.023 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:39.023 "strip_size_kb": 64, 00:11:39.023 "state": "online", 00:11:39.023 "raid_level": "concat", 00:11:39.023 "superblock": true, 00:11:39.023 "num_base_bdevs": 4, 00:11:39.023 "num_base_bdevs_discovered": 4, 00:11:39.023 "num_base_bdevs_operational": 4, 00:11:39.023 "base_bdevs_list": [ 00:11:39.023 { 00:11:39.023 "name": "BaseBdev1", 00:11:39.023 "uuid": "69ba5045-f7b5-49f5-9699-081356afc027", 00:11:39.023 "is_configured": true, 00:11:39.023 "data_offset": 2048, 00:11:39.023 "data_size": 63488 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "name": "BaseBdev2", 00:11:39.023 "uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 00:11:39.023 "is_configured": true, 00:11:39.023 "data_offset": 2048, 00:11:39.023 "data_size": 63488 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "name": "BaseBdev3", 00:11:39.023 "uuid": "1bd4f1f6-b128-4675-840c-4451f852a813", 00:11:39.023 "is_configured": true, 
00:11:39.023 "data_offset": 2048, 00:11:39.023 "data_size": 63488 00:11:39.023 }, 00:11:39.023 { 00:11:39.023 "name": "BaseBdev4", 00:11:39.023 "uuid": "28a7e2cd-5207-464f-97b7-b3147a4a5d88", 00:11:39.023 "is_configured": true, 00:11:39.023 "data_offset": 2048, 00:11:39.023 "data_size": 63488 00:11:39.023 } 00:11:39.023 ] 00:11:39.023 } 00:11:39.023 } 00:11:39.023 }' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:39.023 BaseBdev2 00:11:39.023 BaseBdev3 00:11:39.023 BaseBdev4' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.023 07:43:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.023 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.024 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:39.284 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:39.284 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.284 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.284 07:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.284 07:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.284 [2024-11-29 07:43:29.020823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.284 [2024-11-29 07:43:29.020855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.284 [2024-11-29 07:43:29.020909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.284 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.284 "name": "Existed_Raid", 00:11:39.284 "uuid": "94ed7ec6-a238-4c00-b105-8acaf3480824", 00:11:39.284 "strip_size_kb": 64, 00:11:39.284 "state": "offline", 00:11:39.284 "raid_level": "concat", 00:11:39.284 "superblock": true, 00:11:39.284 "num_base_bdevs": 4, 00:11:39.284 "num_base_bdevs_discovered": 3, 00:11:39.284 "num_base_bdevs_operational": 3, 00:11:39.284 "base_bdevs_list": [ 00:11:39.284 { 00:11:39.284 "name": null, 00:11:39.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.284 "is_configured": false, 00:11:39.284 "data_offset": 0, 00:11:39.284 "data_size": 63488 00:11:39.284 }, 00:11:39.284 { 00:11:39.284 "name": "BaseBdev2", 00:11:39.284 "uuid": "58ac099b-ff54-4899-b8b9-6be24825e0bc", 00:11:39.284 "is_configured": true, 00:11:39.284 "data_offset": 2048, 00:11:39.284 "data_size": 63488 00:11:39.284 }, 00:11:39.284 { 00:11:39.284 "name": "BaseBdev3", 00:11:39.284 "uuid": "1bd4f1f6-b128-4675-840c-4451f852a813", 00:11:39.284 "is_configured": true, 00:11:39.285 "data_offset": 2048, 00:11:39.285 "data_size": 63488 00:11:39.285 }, 00:11:39.285 { 00:11:39.285 "name": "BaseBdev4", 00:11:39.285 "uuid": "28a7e2cd-5207-464f-97b7-b3147a4a5d88", 00:11:39.285 "is_configured": true, 00:11:39.285 "data_offset": 2048, 00:11:39.285 "data_size": 63488 00:11:39.285 } 00:11:39.285 ] 00:11:39.285 }' 00:11:39.285 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.285 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.855 
07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.855 [2024-11-29 07:43:29.612175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.855 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.856 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.856 [2024-11-29 07:43:29.770936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:40.116 07:43:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.116 07:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.116 [2024-11-29 07:43:29.922774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:40.116 [2024-11-29 07:43:29.922829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.116 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 BaseBdev2 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 [ 00:11:40.377 { 00:11:40.377 "name": "BaseBdev2", 00:11:40.377 "aliases": [ 00:11:40.377 
"84e25e9c-22f1-4125-a204-baf59a5095dd" 00:11:40.377 ], 00:11:40.377 "product_name": "Malloc disk", 00:11:40.377 "block_size": 512, 00:11:40.377 "num_blocks": 65536, 00:11:40.377 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:40.377 "assigned_rate_limits": { 00:11:40.377 "rw_ios_per_sec": 0, 00:11:40.377 "rw_mbytes_per_sec": 0, 00:11:40.377 "r_mbytes_per_sec": 0, 00:11:40.377 "w_mbytes_per_sec": 0 00:11:40.377 }, 00:11:40.377 "claimed": false, 00:11:40.377 "zoned": false, 00:11:40.377 "supported_io_types": { 00:11:40.377 "read": true, 00:11:40.377 "write": true, 00:11:40.377 "unmap": true, 00:11:40.377 "flush": true, 00:11:40.377 "reset": true, 00:11:40.377 "nvme_admin": false, 00:11:40.377 "nvme_io": false, 00:11:40.377 "nvme_io_md": false, 00:11:40.377 "write_zeroes": true, 00:11:40.377 "zcopy": true, 00:11:40.377 "get_zone_info": false, 00:11:40.377 "zone_management": false, 00:11:40.377 "zone_append": false, 00:11:40.377 "compare": false, 00:11:40.377 "compare_and_write": false, 00:11:40.377 "abort": true, 00:11:40.377 "seek_hole": false, 00:11:40.377 "seek_data": false, 00:11:40.377 "copy": true, 00:11:40.377 "nvme_iov_md": false 00:11:40.377 }, 00:11:40.377 "memory_domains": [ 00:11:40.377 { 00:11:40.377 "dma_device_id": "system", 00:11:40.377 "dma_device_type": 1 00:11:40.377 }, 00:11:40.377 { 00:11:40.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.377 "dma_device_type": 2 00:11:40.377 } 00:11:40.377 ], 00:11:40.377 "driver_specific": {} 00:11:40.377 } 00:11:40.377 ] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.377 07:43:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 BaseBdev3 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.377 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.377 [ 00:11:40.377 { 
00:11:40.377 "name": "BaseBdev3", 00:11:40.377 "aliases": [ 00:11:40.377 "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3" 00:11:40.377 ], 00:11:40.377 "product_name": "Malloc disk", 00:11:40.377 "block_size": 512, 00:11:40.377 "num_blocks": 65536, 00:11:40.377 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:40.377 "assigned_rate_limits": { 00:11:40.377 "rw_ios_per_sec": 0, 00:11:40.377 "rw_mbytes_per_sec": 0, 00:11:40.377 "r_mbytes_per_sec": 0, 00:11:40.377 "w_mbytes_per_sec": 0 00:11:40.377 }, 00:11:40.377 "claimed": false, 00:11:40.377 "zoned": false, 00:11:40.377 "supported_io_types": { 00:11:40.377 "read": true, 00:11:40.378 "write": true, 00:11:40.378 "unmap": true, 00:11:40.378 "flush": true, 00:11:40.378 "reset": true, 00:11:40.378 "nvme_admin": false, 00:11:40.378 "nvme_io": false, 00:11:40.378 "nvme_io_md": false, 00:11:40.378 "write_zeroes": true, 00:11:40.378 "zcopy": true, 00:11:40.378 "get_zone_info": false, 00:11:40.378 "zone_management": false, 00:11:40.378 "zone_append": false, 00:11:40.378 "compare": false, 00:11:40.378 "compare_and_write": false, 00:11:40.378 "abort": true, 00:11:40.378 "seek_hole": false, 00:11:40.378 "seek_data": false, 00:11:40.378 "copy": true, 00:11:40.378 "nvme_iov_md": false 00:11:40.378 }, 00:11:40.378 "memory_domains": [ 00:11:40.378 { 00:11:40.378 "dma_device_id": "system", 00:11:40.378 "dma_device_type": 1 00:11:40.378 }, 00:11:40.378 { 00:11:40.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.378 "dma_device_type": 2 00:11:40.378 } 00:11:40.378 ], 00:11:40.378 "driver_specific": {} 00:11:40.378 } 00:11:40.378 ] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.378 BaseBdev4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:40.378 [ 00:11:40.378 { 00:11:40.378 "name": "BaseBdev4", 00:11:40.378 "aliases": [ 00:11:40.378 "7e3700b3-54bd-4cc4-a471-b5263c46d449" 00:11:40.378 ], 00:11:40.378 "product_name": "Malloc disk", 00:11:40.378 "block_size": 512, 00:11:40.378 "num_blocks": 65536, 00:11:40.378 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:40.378 "assigned_rate_limits": { 00:11:40.378 "rw_ios_per_sec": 0, 00:11:40.378 "rw_mbytes_per_sec": 0, 00:11:40.378 "r_mbytes_per_sec": 0, 00:11:40.378 "w_mbytes_per_sec": 0 00:11:40.378 }, 00:11:40.378 "claimed": false, 00:11:40.378 "zoned": false, 00:11:40.378 "supported_io_types": { 00:11:40.378 "read": true, 00:11:40.378 "write": true, 00:11:40.378 "unmap": true, 00:11:40.378 "flush": true, 00:11:40.378 "reset": true, 00:11:40.378 "nvme_admin": false, 00:11:40.378 "nvme_io": false, 00:11:40.378 "nvme_io_md": false, 00:11:40.378 "write_zeroes": true, 00:11:40.378 "zcopy": true, 00:11:40.378 "get_zone_info": false, 00:11:40.378 "zone_management": false, 00:11:40.378 "zone_append": false, 00:11:40.378 "compare": false, 00:11:40.378 "compare_and_write": false, 00:11:40.378 "abort": true, 00:11:40.378 "seek_hole": false, 00:11:40.378 "seek_data": false, 00:11:40.378 "copy": true, 00:11:40.378 "nvme_iov_md": false 00:11:40.378 }, 00:11:40.378 "memory_domains": [ 00:11:40.378 { 00:11:40.378 "dma_device_id": "system", 00:11:40.378 "dma_device_type": 1 00:11:40.378 }, 00:11:40.378 { 00:11:40.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.378 "dma_device_type": 2 00:11:40.378 } 00:11:40.378 ], 00:11:40.378 "driver_specific": {} 00:11:40.378 } 00:11:40.378 ] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.378 07:43:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.378 [2024-11-29 07:43:30.304337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.378 [2024-11-29 07:43:30.304382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.378 [2024-11-29 07:43:30.304404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.378 [2024-11-29 07:43:30.306271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.378 [2024-11-29 07:43:30.306344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.378 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.379 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.379 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.379 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.638 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.638 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.638 "name": "Existed_Raid", 00:11:40.638 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:40.638 "strip_size_kb": 64, 00:11:40.638 "state": "configuring", 00:11:40.638 "raid_level": "concat", 00:11:40.638 "superblock": true, 00:11:40.638 "num_base_bdevs": 4, 00:11:40.638 "num_base_bdevs_discovered": 3, 00:11:40.638 "num_base_bdevs_operational": 4, 00:11:40.638 "base_bdevs_list": [ 00:11:40.638 { 00:11:40.638 "name": "BaseBdev1", 00:11:40.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.638 "is_configured": false, 00:11:40.638 "data_offset": 0, 00:11:40.638 "data_size": 0 00:11:40.638 }, 00:11:40.638 { 00:11:40.638 "name": "BaseBdev2", 00:11:40.638 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:40.638 "is_configured": true, 00:11:40.638 "data_offset": 2048, 00:11:40.638 "data_size": 63488 
00:11:40.638 }, 00:11:40.638 { 00:11:40.638 "name": "BaseBdev3", 00:11:40.638 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:40.638 "is_configured": true, 00:11:40.638 "data_offset": 2048, 00:11:40.638 "data_size": 63488 00:11:40.638 }, 00:11:40.638 { 00:11:40.638 "name": "BaseBdev4", 00:11:40.638 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:40.638 "is_configured": true, 00:11:40.638 "data_offset": 2048, 00:11:40.638 "data_size": 63488 00:11:40.638 } 00:11:40.638 ] 00:11:40.638 }' 00:11:40.638 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.638 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.899 [2024-11-29 07:43:30.743594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.899 "name": "Existed_Raid", 00:11:40.899 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:40.899 "strip_size_kb": 64, 00:11:40.899 "state": "configuring", 00:11:40.899 "raid_level": "concat", 00:11:40.899 "superblock": true, 00:11:40.899 "num_base_bdevs": 4, 00:11:40.899 "num_base_bdevs_discovered": 2, 00:11:40.899 "num_base_bdevs_operational": 4, 00:11:40.899 "base_bdevs_list": [ 00:11:40.899 { 00:11:40.899 "name": "BaseBdev1", 00:11:40.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.899 "is_configured": false, 00:11:40.899 "data_offset": 0, 00:11:40.899 "data_size": 0 00:11:40.899 }, 00:11:40.899 { 00:11:40.899 "name": null, 00:11:40.899 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:40.899 "is_configured": false, 00:11:40.899 "data_offset": 0, 00:11:40.899 "data_size": 63488 
00:11:40.899 }, 00:11:40.899 { 00:11:40.899 "name": "BaseBdev3", 00:11:40.899 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:40.899 "is_configured": true, 00:11:40.899 "data_offset": 2048, 00:11:40.899 "data_size": 63488 00:11:40.899 }, 00:11:40.899 { 00:11:40.899 "name": "BaseBdev4", 00:11:40.899 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:40.899 "is_configured": true, 00:11:40.899 "data_offset": 2048, 00:11:40.899 "data_size": 63488 00:11:40.899 } 00:11:40.899 ] 00:11:40.899 }' 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.899 07:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 [2024-11-29 07:43:31.267959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.468 BaseBdev1 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 [ 00:11:41.468 { 00:11:41.468 "name": "BaseBdev1", 00:11:41.468 "aliases": [ 00:11:41.468 "abaa552e-fb82-4977-bfa3-3c8ba65e958e" 00:11:41.468 ], 00:11:41.469 "product_name": "Malloc disk", 00:11:41.469 "block_size": 512, 00:11:41.469 "num_blocks": 65536, 00:11:41.469 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:41.469 "assigned_rate_limits": { 00:11:41.469 "rw_ios_per_sec": 0, 00:11:41.469 "rw_mbytes_per_sec": 0, 
00:11:41.469 "r_mbytes_per_sec": 0, 00:11:41.469 "w_mbytes_per_sec": 0 00:11:41.469 }, 00:11:41.469 "claimed": true, 00:11:41.469 "claim_type": "exclusive_write", 00:11:41.469 "zoned": false, 00:11:41.469 "supported_io_types": { 00:11:41.469 "read": true, 00:11:41.469 "write": true, 00:11:41.469 "unmap": true, 00:11:41.469 "flush": true, 00:11:41.469 "reset": true, 00:11:41.469 "nvme_admin": false, 00:11:41.469 "nvme_io": false, 00:11:41.469 "nvme_io_md": false, 00:11:41.469 "write_zeroes": true, 00:11:41.469 "zcopy": true, 00:11:41.469 "get_zone_info": false, 00:11:41.469 "zone_management": false, 00:11:41.469 "zone_append": false, 00:11:41.469 "compare": false, 00:11:41.469 "compare_and_write": false, 00:11:41.469 "abort": true, 00:11:41.469 "seek_hole": false, 00:11:41.469 "seek_data": false, 00:11:41.469 "copy": true, 00:11:41.469 "nvme_iov_md": false 00:11:41.469 }, 00:11:41.469 "memory_domains": [ 00:11:41.469 { 00:11:41.469 "dma_device_id": "system", 00:11:41.469 "dma_device_type": 1 00:11:41.469 }, 00:11:41.469 { 00:11:41.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.469 "dma_device_type": 2 00:11:41.469 } 00:11:41.469 ], 00:11:41.469 "driver_specific": {} 00:11:41.469 } 00:11:41.469 ] 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.469 07:43:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.469 "name": "Existed_Raid", 00:11:41.469 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:41.469 "strip_size_kb": 64, 00:11:41.469 "state": "configuring", 00:11:41.469 "raid_level": "concat", 00:11:41.469 "superblock": true, 00:11:41.469 "num_base_bdevs": 4, 00:11:41.469 "num_base_bdevs_discovered": 3, 00:11:41.469 "num_base_bdevs_operational": 4, 00:11:41.469 "base_bdevs_list": [ 00:11:41.469 { 00:11:41.469 "name": "BaseBdev1", 00:11:41.469 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:41.469 "is_configured": true, 00:11:41.469 "data_offset": 2048, 00:11:41.469 "data_size": 63488 00:11:41.469 }, 00:11:41.469 { 
00:11:41.469 "name": null, 00:11:41.469 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:41.469 "is_configured": false, 00:11:41.469 "data_offset": 0, 00:11:41.469 "data_size": 63488 00:11:41.469 }, 00:11:41.469 { 00:11:41.469 "name": "BaseBdev3", 00:11:41.469 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:41.469 "is_configured": true, 00:11:41.469 "data_offset": 2048, 00:11:41.469 "data_size": 63488 00:11:41.469 }, 00:11:41.469 { 00:11:41.469 "name": "BaseBdev4", 00:11:41.469 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:41.469 "is_configured": true, 00:11:41.469 "data_offset": 2048, 00:11:41.469 "data_size": 63488 00:11:41.469 } 00:11:41.469 ] 00:11:41.469 }' 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.469 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.038 [2024-11-29 07:43:31.779197] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.038 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.038 07:43:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.038 "name": "Existed_Raid", 00:11:42.038 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:42.038 "strip_size_kb": 64, 00:11:42.038 "state": "configuring", 00:11:42.038 "raid_level": "concat", 00:11:42.038 "superblock": true, 00:11:42.038 "num_base_bdevs": 4, 00:11:42.038 "num_base_bdevs_discovered": 2, 00:11:42.038 "num_base_bdevs_operational": 4, 00:11:42.038 "base_bdevs_list": [ 00:11:42.038 { 00:11:42.038 "name": "BaseBdev1", 00:11:42.038 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:42.038 "is_configured": true, 00:11:42.038 "data_offset": 2048, 00:11:42.038 "data_size": 63488 00:11:42.038 }, 00:11:42.038 { 00:11:42.038 "name": null, 00:11:42.038 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:42.038 "is_configured": false, 00:11:42.039 "data_offset": 0, 00:11:42.039 "data_size": 63488 00:11:42.039 }, 00:11:42.039 { 00:11:42.039 "name": null, 00:11:42.039 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:42.039 "is_configured": false, 00:11:42.039 "data_offset": 0, 00:11:42.039 "data_size": 63488 00:11:42.039 }, 00:11:42.039 { 00:11:42.039 "name": "BaseBdev4", 00:11:42.039 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:42.039 "is_configured": true, 00:11:42.039 "data_offset": 2048, 00:11:42.039 "data_size": 63488 00:11:42.039 } 00:11:42.039 ] 00:11:42.039 }' 00:11:42.039 07:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.039 07:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.299 
07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.299 [2024-11-29 07:43:32.222395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.299 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.558 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.558 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.558 "name": "Existed_Raid", 00:11:42.558 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:42.558 "strip_size_kb": 64, 00:11:42.558 "state": "configuring", 00:11:42.558 "raid_level": "concat", 00:11:42.558 "superblock": true, 00:11:42.558 "num_base_bdevs": 4, 00:11:42.558 "num_base_bdevs_discovered": 3, 00:11:42.558 "num_base_bdevs_operational": 4, 00:11:42.558 "base_bdevs_list": [ 00:11:42.558 { 00:11:42.558 "name": "BaseBdev1", 00:11:42.558 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:42.558 "is_configured": true, 00:11:42.558 "data_offset": 2048, 00:11:42.558 "data_size": 63488 00:11:42.558 }, 00:11:42.558 { 00:11:42.558 "name": null, 00:11:42.558 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:42.558 "is_configured": false, 00:11:42.558 "data_offset": 0, 00:11:42.558 "data_size": 63488 00:11:42.558 }, 00:11:42.558 { 00:11:42.558 "name": "BaseBdev3", 00:11:42.558 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:42.558 "is_configured": true, 00:11:42.558 "data_offset": 2048, 00:11:42.558 "data_size": 63488 00:11:42.558 }, 00:11:42.558 { 00:11:42.558 "name": "BaseBdev4", 00:11:42.558 "uuid": 
"7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:42.558 "is_configured": true, 00:11:42.558 "data_offset": 2048, 00:11:42.558 "data_size": 63488 00:11:42.558 } 00:11:42.558 ] 00:11:42.558 }' 00:11:42.558 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.558 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.828 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 [2024-11-29 07:43:32.685661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.129 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.130 "name": "Existed_Raid", 00:11:43.130 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:43.130 "strip_size_kb": 64, 00:11:43.130 "state": "configuring", 00:11:43.130 "raid_level": "concat", 00:11:43.130 "superblock": true, 00:11:43.130 "num_base_bdevs": 4, 00:11:43.130 "num_base_bdevs_discovered": 2, 00:11:43.130 "num_base_bdevs_operational": 4, 00:11:43.130 "base_bdevs_list": [ 00:11:43.130 { 00:11:43.130 "name": null, 00:11:43.130 
"uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:43.130 "is_configured": false, 00:11:43.130 "data_offset": 0, 00:11:43.130 "data_size": 63488 00:11:43.130 }, 00:11:43.130 { 00:11:43.130 "name": null, 00:11:43.130 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:43.130 "is_configured": false, 00:11:43.130 "data_offset": 0, 00:11:43.130 "data_size": 63488 00:11:43.130 }, 00:11:43.130 { 00:11:43.130 "name": "BaseBdev3", 00:11:43.130 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:43.130 "is_configured": true, 00:11:43.130 "data_offset": 2048, 00:11:43.130 "data_size": 63488 00:11:43.130 }, 00:11:43.130 { 00:11:43.130 "name": "BaseBdev4", 00:11:43.130 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:43.130 "is_configured": true, 00:11:43.130 "data_offset": 2048, 00:11:43.130 "data_size": 63488 00:11:43.130 } 00:11:43.130 ] 00:11:43.130 }' 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.130 07:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 [2024-11-29 07:43:33.262174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.398 "name": "Existed_Raid", 00:11:43.398 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:43.398 "strip_size_kb": 64, 00:11:43.398 "state": "configuring", 00:11:43.398 "raid_level": "concat", 00:11:43.398 "superblock": true, 00:11:43.398 "num_base_bdevs": 4, 00:11:43.398 "num_base_bdevs_discovered": 3, 00:11:43.398 "num_base_bdevs_operational": 4, 00:11:43.398 "base_bdevs_list": [ 00:11:43.398 { 00:11:43.398 "name": null, 00:11:43.398 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:43.398 "is_configured": false, 00:11:43.398 "data_offset": 0, 00:11:43.398 "data_size": 63488 00:11:43.398 }, 00:11:43.398 { 00:11:43.398 "name": "BaseBdev2", 00:11:43.398 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:43.398 "is_configured": true, 00:11:43.398 "data_offset": 2048, 00:11:43.398 "data_size": 63488 00:11:43.398 }, 00:11:43.398 { 00:11:43.398 "name": "BaseBdev3", 00:11:43.398 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:43.398 "is_configured": true, 00:11:43.398 "data_offset": 2048, 00:11:43.398 "data_size": 63488 00:11:43.398 }, 00:11:43.398 { 00:11:43.398 "name": "BaseBdev4", 00:11:43.398 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:43.398 "is_configured": true, 00:11:43.398 "data_offset": 2048, 00:11:43.398 "data_size": 63488 00:11:43.398 } 00:11:43.398 ] 00:11:43.398 }' 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.398 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.967 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.968 07:43:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abaa552e-fb82-4977-bfa3-3c8ba65e958e 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 [2024-11-29 07:43:33.818478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:43.968 [2024-11-29 07:43:33.818734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:43.968 [2024-11-29 07:43:33.818748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:43.968 [2024-11-29 07:43:33.819019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:43.968 [2024-11-29 07:43:33.819179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:43.968 [2024-11-29 07:43:33.819199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:43.968 [2024-11-29 07:43:33.819331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.968 NewBaseBdev 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 [ 00:11:43.968 { 00:11:43.968 "name": "NewBaseBdev", 00:11:43.968 "aliases": [ 00:11:43.968 "abaa552e-fb82-4977-bfa3-3c8ba65e958e" 00:11:43.968 ], 00:11:43.968 "product_name": "Malloc disk", 00:11:43.968 "block_size": 512, 00:11:43.968 "num_blocks": 65536, 00:11:43.968 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:43.968 "assigned_rate_limits": { 00:11:43.968 "rw_ios_per_sec": 0, 00:11:43.968 "rw_mbytes_per_sec": 0, 00:11:43.968 "r_mbytes_per_sec": 0, 00:11:43.968 "w_mbytes_per_sec": 0 00:11:43.968 }, 00:11:43.968 "claimed": true, 00:11:43.968 "claim_type": "exclusive_write", 00:11:43.968 "zoned": false, 00:11:43.968 "supported_io_types": { 00:11:43.968 "read": true, 00:11:43.968 "write": true, 00:11:43.968 "unmap": true, 00:11:43.968 "flush": true, 00:11:43.968 "reset": true, 00:11:43.968 "nvme_admin": false, 00:11:43.968 "nvme_io": false, 00:11:43.968 "nvme_io_md": false, 00:11:43.968 "write_zeroes": true, 00:11:43.968 "zcopy": true, 00:11:43.968 "get_zone_info": false, 00:11:43.968 "zone_management": false, 00:11:43.968 "zone_append": false, 00:11:43.968 "compare": false, 00:11:43.968 "compare_and_write": false, 00:11:43.968 "abort": true, 00:11:43.968 "seek_hole": false, 00:11:43.968 "seek_data": false, 00:11:43.968 "copy": true, 00:11:43.968 "nvme_iov_md": false 00:11:43.968 }, 00:11:43.968 "memory_domains": [ 00:11:43.968 { 00:11:43.968 "dma_device_id": "system", 00:11:43.968 "dma_device_type": 1 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.968 "dma_device_type": 2 00:11:43.968 } 00:11:43.968 ], 00:11:43.968 "driver_specific": {} 00:11:43.968 } 00:11:43.968 ] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.968 07:43:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.968 "name": "Existed_Raid", 00:11:43.968 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:43.968 "strip_size_kb": 64, 00:11:43.968 
"state": "online", 00:11:43.968 "raid_level": "concat", 00:11:43.968 "superblock": true, 00:11:43.968 "num_base_bdevs": 4, 00:11:43.968 "num_base_bdevs_discovered": 4, 00:11:43.968 "num_base_bdevs_operational": 4, 00:11:43.968 "base_bdevs_list": [ 00:11:43.968 { 00:11:43.968 "name": "NewBaseBdev", 00:11:43.968 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 2048, 00:11:43.968 "data_size": 63488 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev2", 00:11:43.968 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 2048, 00:11:43.968 "data_size": 63488 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev3", 00:11:43.968 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 2048, 00:11:43.968 "data_size": 63488 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev4", 00:11:43.968 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 2048, 00:11:43.968 "data_size": 63488 00:11:43.968 } 00:11:43.968 ] 00:11:43.968 }' 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.968 07:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.537 
07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.537 [2024-11-29 07:43:34.274067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.537 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.537 "name": "Existed_Raid", 00:11:44.537 "aliases": [ 00:11:44.537 "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3" 00:11:44.537 ], 00:11:44.537 "product_name": "Raid Volume", 00:11:44.537 "block_size": 512, 00:11:44.537 "num_blocks": 253952, 00:11:44.537 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:44.537 "assigned_rate_limits": { 00:11:44.537 "rw_ios_per_sec": 0, 00:11:44.537 "rw_mbytes_per_sec": 0, 00:11:44.537 "r_mbytes_per_sec": 0, 00:11:44.537 "w_mbytes_per_sec": 0 00:11:44.537 }, 00:11:44.537 "claimed": false, 00:11:44.537 "zoned": false, 00:11:44.537 "supported_io_types": { 00:11:44.537 "read": true, 00:11:44.537 "write": true, 00:11:44.537 "unmap": true, 00:11:44.537 "flush": true, 00:11:44.537 "reset": true, 00:11:44.537 "nvme_admin": false, 00:11:44.537 "nvme_io": false, 00:11:44.537 "nvme_io_md": false, 00:11:44.537 "write_zeroes": true, 00:11:44.537 "zcopy": false, 00:11:44.537 "get_zone_info": false, 00:11:44.537 "zone_management": false, 00:11:44.537 "zone_append": false, 00:11:44.537 "compare": false, 00:11:44.537 "compare_and_write": false, 00:11:44.537 "abort": 
false, 00:11:44.537 "seek_hole": false, 00:11:44.537 "seek_data": false, 00:11:44.537 "copy": false, 00:11:44.537 "nvme_iov_md": false 00:11:44.537 }, 00:11:44.537 "memory_domains": [ 00:11:44.537 { 00:11:44.537 "dma_device_id": "system", 00:11:44.537 "dma_device_type": 1 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.537 "dma_device_type": 2 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "system", 00:11:44.537 "dma_device_type": 1 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.537 "dma_device_type": 2 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "system", 00:11:44.537 "dma_device_type": 1 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.537 "dma_device_type": 2 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "system", 00:11:44.537 "dma_device_type": 1 00:11:44.537 }, 00:11:44.537 { 00:11:44.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.537 "dma_device_type": 2 00:11:44.537 } 00:11:44.537 ], 00:11:44.537 "driver_specific": { 00:11:44.537 "raid": { 00:11:44.537 "uuid": "6068bd0b-5bf4-4f90-b405-5bf9fa55bcb3", 00:11:44.537 "strip_size_kb": 64, 00:11:44.537 "state": "online", 00:11:44.537 "raid_level": "concat", 00:11:44.537 "superblock": true, 00:11:44.537 "num_base_bdevs": 4, 00:11:44.537 "num_base_bdevs_discovered": 4, 00:11:44.537 "num_base_bdevs_operational": 4, 00:11:44.537 "base_bdevs_list": [ 00:11:44.538 { 00:11:44.538 "name": "NewBaseBdev", 00:11:44.538 "uuid": "abaa552e-fb82-4977-bfa3-3c8ba65e958e", 00:11:44.538 "is_configured": true, 00:11:44.538 "data_offset": 2048, 00:11:44.538 "data_size": 63488 00:11:44.538 }, 00:11:44.538 { 00:11:44.538 "name": "BaseBdev2", 00:11:44.538 "uuid": "84e25e9c-22f1-4125-a204-baf59a5095dd", 00:11:44.538 "is_configured": true, 00:11:44.538 "data_offset": 2048, 00:11:44.538 "data_size": 63488 00:11:44.538 }, 00:11:44.538 { 00:11:44.538 
"name": "BaseBdev3", 00:11:44.538 "uuid": "cd2c6744-6db1-49fc-ae60-b8cf4bbb5ea3", 00:11:44.538 "is_configured": true, 00:11:44.538 "data_offset": 2048, 00:11:44.538 "data_size": 63488 00:11:44.538 }, 00:11:44.538 { 00:11:44.538 "name": "BaseBdev4", 00:11:44.538 "uuid": "7e3700b3-54bd-4cc4-a471-b5263c46d449", 00:11:44.538 "is_configured": true, 00:11:44.538 "data_offset": 2048, 00:11:44.538 "data_size": 63488 00:11:44.538 } 00:11:44.538 ] 00:11:44.538 } 00:11:44.538 } 00:11:44.538 }' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.538 BaseBdev2 00:11:44.538 BaseBdev3 00:11:44.538 BaseBdev4' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.538 07:43:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.538 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.798 [2024-11-29 07:43:34.565208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.798 [2024-11-29 07:43:34.565253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.798 [2024-11-29 07:43:34.565348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.798 [2024-11-29 07:43:34.565414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.798 [2024-11-29 07:43:34.565424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71721 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71721 ']' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71721 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71721 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.798 killing process with pid 71721 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71721' 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71721 00:11:44.798 [2024-11-29 07:43:34.614276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.798 07:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71721 00:11:45.058 [2024-11-29 07:43:34.999280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.441 07:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.441 00:11:46.441 real 0m11.398s 00:11:46.441 user 0m18.197s 00:11:46.441 sys 0m1.990s 00:11:46.441 07:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.441 07:43:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.441 ************************************ 00:11:46.441 END TEST raid_state_function_test_sb 00:11:46.441 ************************************ 00:11:46.441 07:43:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:46.441 07:43:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.441 07:43:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.441 07:43:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.441 ************************************ 00:11:46.441 START TEST raid_superblock_test 00:11:46.441 ************************************ 00:11:46.441 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72394 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72394 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72394 ']' 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.442 07:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.442 [2024-11-29 07:43:36.258154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:46.442 [2024-11-29 07:43:36.258271] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72394 ] 00:11:46.702 [2024-11-29 07:43:36.410227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.702 [2024-11-29 07:43:36.518764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.961 [2024-11-29 07:43:36.707111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.961 [2024-11-29 07:43:36.707178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.221 
07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 malloc1 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 [2024-11-29 07:43:37.135166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.221 [2024-11-29 07:43:37.135242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.221 [2024-11-29 07:43:37.135263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.221 [2024-11-29 07:43:37.135272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.221 [2024-11-29 07:43:37.137316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.221 [2024-11-29 07:43:37.137352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.221 pt1 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.221 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 malloc2 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 [2024-11-29 07:43:37.187972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.481 [2024-11-29 07:43:37.188026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.481 [2024-11-29 07:43:37.188065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.481 [2024-11-29 07:43:37.188074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.481 [2024-11-29 07:43:37.190206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.481 [2024-11-29 07:43:37.190238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.481 
pt2 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 malloc3 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 [2024-11-29 07:43:37.261061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.481 [2024-11-29 07:43:37.261127] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.481 [2024-11-29 07:43:37.261149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.481 [2024-11-29 07:43:37.261158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.481 [2024-11-29 07:43:37.263226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.481 [2024-11-29 07:43:37.263263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.481 pt3 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 malloc4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 [2024-11-29 07:43:37.313427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.481 [2024-11-29 07:43:37.313482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.481 [2024-11-29 07:43:37.313507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.481 [2024-11-29 07:43:37.313515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.481 [2024-11-29 07:43:37.315504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.481 [2024-11-29 07:43:37.315539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.481 pt4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.481 [2024-11-29 07:43:37.325439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.481 [2024-11-29 
07:43:37.327163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.481 [2024-11-29 07:43:37.327250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.481 [2024-11-29 07:43:37.327297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.481 [2024-11-29 07:43:37.327478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:47.481 [2024-11-29 07:43:37.327496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.481 [2024-11-29 07:43:37.327745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.481 [2024-11-29 07:43:37.327938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:47.481 [2024-11-29 07:43:37.327958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:47.481 [2024-11-29 07:43:37.328112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.481 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.482 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.482 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.482 "name": "raid_bdev1", 00:11:47.482 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26", 00:11:47.482 "strip_size_kb": 64, 00:11:47.482 "state": "online", 00:11:47.482 "raid_level": "concat", 00:11:47.482 "superblock": true, 00:11:47.482 "num_base_bdevs": 4, 00:11:47.482 "num_base_bdevs_discovered": 4, 00:11:47.482 "num_base_bdevs_operational": 4, 00:11:47.482 "base_bdevs_list": [ 00:11:47.482 { 00:11:47.482 "name": "pt1", 00:11:47.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.482 "is_configured": true, 00:11:47.482 "data_offset": 2048, 00:11:47.482 "data_size": 63488 00:11:47.482 }, 00:11:47.482 { 00:11:47.482 "name": "pt2", 00:11:47.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.482 "is_configured": true, 00:11:47.482 "data_offset": 2048, 00:11:47.482 "data_size": 63488 00:11:47.482 }, 00:11:47.482 { 00:11:47.482 "name": "pt3", 00:11:47.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.482 "is_configured": true, 00:11:47.482 "data_offset": 2048, 00:11:47.482 
"data_size": 63488 00:11:47.482 }, 00:11:47.482 { 00:11:47.482 "name": "pt4", 00:11:47.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.482 "is_configured": true, 00:11:47.482 "data_offset": 2048, 00:11:47.482 "data_size": 63488 00:11:47.482 } 00:11:47.482 ] 00:11:47.482 }' 00:11:47.482 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.482 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.051 [2024-11-29 07:43:37.749042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.051 "name": "raid_bdev1", 00:11:48.051 "aliases": [ 00:11:48.051 "b9fddee2-49f4-43e9-8258-a925c305fc26" 
00:11:48.051 ], 00:11:48.051 "product_name": "Raid Volume", 00:11:48.051 "block_size": 512, 00:11:48.051 "num_blocks": 253952, 00:11:48.051 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26", 00:11:48.051 "assigned_rate_limits": { 00:11:48.051 "rw_ios_per_sec": 0, 00:11:48.051 "rw_mbytes_per_sec": 0, 00:11:48.051 "r_mbytes_per_sec": 0, 00:11:48.051 "w_mbytes_per_sec": 0 00:11:48.051 }, 00:11:48.051 "claimed": false, 00:11:48.051 "zoned": false, 00:11:48.051 "supported_io_types": { 00:11:48.051 "read": true, 00:11:48.051 "write": true, 00:11:48.051 "unmap": true, 00:11:48.051 "flush": true, 00:11:48.051 "reset": true, 00:11:48.051 "nvme_admin": false, 00:11:48.051 "nvme_io": false, 00:11:48.051 "nvme_io_md": false, 00:11:48.051 "write_zeroes": true, 00:11:48.051 "zcopy": false, 00:11:48.051 "get_zone_info": false, 00:11:48.051 "zone_management": false, 00:11:48.051 "zone_append": false, 00:11:48.051 "compare": false, 00:11:48.051 "compare_and_write": false, 00:11:48.051 "abort": false, 00:11:48.051 "seek_hole": false, 00:11:48.051 "seek_data": false, 00:11:48.051 "copy": false, 00:11:48.051 "nvme_iov_md": false 00:11:48.051 }, 00:11:48.051 "memory_domains": [ 00:11:48.051 { 00:11:48.051 "dma_device_id": "system", 00:11:48.051 "dma_device_type": 1 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.051 "dma_device_type": 2 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "system", 00:11:48.051 "dma_device_type": 1 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.051 "dma_device_type": 2 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "system", 00:11:48.051 "dma_device_type": 1 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.051 "dma_device_type": 2 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": "system", 00:11:48.051 "dma_device_type": 1 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:48.051 "dma_device_type": 2 00:11:48.051 } 00:11:48.051 ], 00:11:48.051 "driver_specific": { 00:11:48.051 "raid": { 00:11:48.051 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26", 00:11:48.051 "strip_size_kb": 64, 00:11:48.051 "state": "online", 00:11:48.051 "raid_level": "concat", 00:11:48.051 "superblock": true, 00:11:48.051 "num_base_bdevs": 4, 00:11:48.051 "num_base_bdevs_discovered": 4, 00:11:48.051 "num_base_bdevs_operational": 4, 00:11:48.051 "base_bdevs_list": [ 00:11:48.051 { 00:11:48.051 "name": "pt1", 00:11:48.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.051 "is_configured": true, 00:11:48.051 "data_offset": 2048, 00:11:48.051 "data_size": 63488 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "name": "pt2", 00:11:48.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.051 "is_configured": true, 00:11:48.051 "data_offset": 2048, 00:11:48.051 "data_size": 63488 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "name": "pt3", 00:11:48.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.051 "is_configured": true, 00:11:48.051 "data_offset": 2048, 00:11:48.051 "data_size": 63488 00:11:48.051 }, 00:11:48.051 { 00:11:48.051 "name": "pt4", 00:11:48.051 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.051 "is_configured": true, 00:11:48.051 "data_offset": 2048, 00:11:48.051 "data_size": 63488 00:11:48.051 } 00:11:48.051 ] 00:11:48.051 } 00:11:48.051 } 00:11:48.051 }' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.051 pt2 00:11:48.051 pt3 00:11:48.051 pt4' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.051 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.052 07:43:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.052 07:43:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 [2024-11-29 07:43:38.092503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9fddee2-49f4-43e9-8258-a925c305fc26 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b9fddee2-49f4-43e9-8258-a925c305fc26 ']' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 [2024-11-29 07:43:38.136074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.311 [2024-11-29 07:43:38.136118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.311 [2024-11-29 07:43:38.136216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.311 [2024-11-29 07:43:38.136300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.311 [2024-11-29 07:43:38.136320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.311 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.571 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.572 07:43:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 [2024-11-29 07:43:38.279960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:48.572 [2024-11-29 07:43:38.282090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:48.572 [2024-11-29 07:43:38.282161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:48.572 [2024-11-29 07:43:38.282200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:48.572 [2024-11-29 07:43:38.282259] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:48.572 [2024-11-29 07:43:38.282320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:48.572 [2024-11-29 07:43:38.282346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:48.572 [2024-11-29 07:43:38.282367] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:48.572 [2024-11-29 07:43:38.282381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.572 [2024-11-29 07:43:38.282394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring
00:11:48.572 request:
00:11:48.572 {
00:11:48.572 "name": "raid_bdev1",
00:11:48.572 "raid_level": "concat",
00:11:48.572 "base_bdevs": [
00:11:48.572 "malloc1",
00:11:48.572 "malloc2",
00:11:48.572 "malloc3",
00:11:48.572 "malloc4"
00:11:48.572 ],
00:11:48.572 "strip_size_kb": 64,
00:11:48.572 "superblock": false,
00:11:48.572 "method": "bdev_raid_create",
00:11:48.572 "req_id": 1
00:11:48.572 }
00:11:48.572 Got JSON-RPC error response
00:11:48.572 response:
00:11:48.572 {
00:11:48.572 "code": -17,
00:11:48.572 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:48.572 }
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1
-u 00000000-0000-0000-0000-000000000001 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 [2024-11-29 07:43:38.347747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.572 [2024-11-29 07:43:38.347886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.572 [2024-11-29 07:43:38.347917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.572 [2024-11-29 07:43:38.347931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.572 [2024-11-29 07:43:38.350341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.572 [2024-11-29 07:43:38.350384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.572 [2024-11-29 07:43:38.350475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.572 [2024-11-29 07:43:38.350542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.572 pt1 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:48.572 "name": "raid_bdev1",
00:11:48.572 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26",
00:11:48.572 "strip_size_kb": 64,
00:11:48.572 "state": "configuring",
00:11:48.572 "raid_level": "concat",
00:11:48.572 "superblock": true,
00:11:48.572 "num_base_bdevs": 4,
00:11:48.572 "num_base_bdevs_discovered": 1,
00:11:48.572 "num_base_bdevs_operational": 4,
00:11:48.572 "base_bdevs_list": [
00:11:48.572 {
00:11:48.572 "name": "pt1",
00:11:48.572 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:48.572 "is_configured": true,
00:11:48.572 "data_offset": 2048,
00:11:48.572 "data_size": 63488
00:11:48.572 },
00:11:48.572 {
00:11:48.572 "name": null,
00:11:48.572 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:48.572 "is_configured": false,
00:11:48.572 "data_offset": 2048,
00:11:48.572 "data_size": 63488
00:11:48.572 },
00:11:48.572 {
00:11:48.572 "name": null,
00:11:48.572 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:48.572 "is_configured": false,
00:11:48.572 "data_offset": 2048,
00:11:48.572 "data_size": 63488
00:11:48.572 },
00:11:48.572 {
00:11:48.572 "name": null,
00:11:48.572 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:48.572 "is_configured": false,
00:11:48.572 "data_offset": 2048,
00:11:48.572 "data_size": 63488
00:11:48.572 }
00:11:48.572 ]
00:11:48.572 }'
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:48.572 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.142 [2024-11-29 07:43:38.799006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:49.142 [2024-11-29 07:43:38.799085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:49.142 [2024-11-29 07:43:38.799118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:49.142 [2024-11-29 07:43:38.799130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:49.142 [2024-11-29 07:43:38.799618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:49.142 [2024-11-29 07:43:38.799647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:49.142 [2024-11-29 07:43:38.799738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:49.142 [2024-11-29 07:43:38.799764]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.142 pt2 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.142 [2024-11-29 07:43:38.810990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.142 07:43:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.142 "name": "raid_bdev1",
00:11:49.142 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26",
00:11:49.142 "strip_size_kb": 64,
00:11:49.142 "state": "configuring",
00:11:49.142 "raid_level": "concat",
00:11:49.142 "superblock": true,
00:11:49.142 "num_base_bdevs": 4,
00:11:49.142 "num_base_bdevs_discovered": 1,
00:11:49.142 "num_base_bdevs_operational": 4,
00:11:49.142 "base_bdevs_list": [
00:11:49.142 {
00:11:49.142 "name": "pt1",
00:11:49.142 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:49.142 "is_configured": true,
00:11:49.142 "data_offset": 2048,
00:11:49.142 "data_size": 63488
00:11:49.142 },
00:11:49.142 {
00:11:49.142 "name": null,
00:11:49.142 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:49.142 "is_configured": false,
00:11:49.142 "data_offset": 0,
00:11:49.142 "data_size": 63488
00:11:49.142 },
00:11:49.142 {
00:11:49.142 "name": null,
00:11:49.142 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:49.142 "is_configured": false,
00:11:49.142 "data_offset": 2048,
00:11:49.142 "data_size": 63488
00:11:49.142 },
00:11:49.142 {
00:11:49.142 "name": null,
00:11:49.142 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:49.142 "is_configured": false,
00:11:49.142 "data_offset": 2048,
00:11:49.142 "data_size": 63488
00:11:49.142 }
00:11:49.142 ]
00:11:49.142 }'
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.142 07:43:38 bdev_raid.raid_superblock_test --
common/autotest_common.sh@10 -- # set +x 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.402 [2024-11-29 07:43:39.238256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.402 [2024-11-29 07:43:39.238372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.402 [2024-11-29 07:43:39.238412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:49.402 [2024-11-29 07:43:39.238440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.402 [2024-11-29 07:43:39.238917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.402 [2024-11-29 07:43:39.238982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.402 [2024-11-29 07:43:39.239109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.402 [2024-11-29 07:43:39.239163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.402 pt2 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.402 [2024-11-29 07:43:39.250210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.402 [2024-11-29 07:43:39.250311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.402 [2024-11-29 07:43:39.250347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:49.402 [2024-11-29 07:43:39.250374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.402 [2024-11-29 07:43:39.250779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.402 [2024-11-29 07:43:39.250842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.402 [2024-11-29 07:43:39.250934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:49.402 [2024-11-29 07:43:39.250990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.402 pt3 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.402 [2024-11-29 07:43:39.258170] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.402 [2024-11-29 07:43:39.258249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.402 [2024-11-29 07:43:39.258282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:49.402 [2024-11-29 07:43:39.258309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.402 [2024-11-29 07:43:39.258691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.402 [2024-11-29 07:43:39.258712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.402 [2024-11-29 07:43:39.258774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.402 [2024-11-29 07:43:39.258794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.402 [2024-11-29 07:43:39.258925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.402 [2024-11-29 07:43:39.258934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:49.402 [2024-11-29 07:43:39.259174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:49.402 [2024-11-29 07:43:39.259331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:49.402 [2024-11-29 07:43:39.259344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:49.402 [2024-11-29 07:43:39.259464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.402 pt4 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs ))
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.402 "name": "raid_bdev1",
00:11:49.402 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26",
00:11:49.402 "strip_size_kb": 64,
00:11:49.402 "state": "online",
00:11:49.402 "raid_level": "concat",
00:11:49.402 "superblock": true,
00:11:49.402 "num_base_bdevs": 4,
00:11:49.402 "num_base_bdevs_discovered": 4,
00:11:49.402 "num_base_bdevs_operational": 4,
00:11:49.402 "base_bdevs_list": [
00:11:49.402 {
00:11:49.402 "name": "pt1",
00:11:49.402 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:49.402 "is_configured": true,
00:11:49.402 "data_offset": 2048,
00:11:49.402 "data_size": 63488
00:11:49.402 },
00:11:49.402 {
00:11:49.402 "name": "pt2",
00:11:49.402 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:49.402 "is_configured": true,
00:11:49.402 "data_offset": 2048,
00:11:49.402 "data_size": 63488
00:11:49.402 },
00:11:49.402 {
00:11:49.402 "name": "pt3",
00:11:49.402 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:49.402 "is_configured": true,
00:11:49.402 "data_offset": 2048,
00:11:49.402 "data_size": 63488
00:11:49.402 },
00:11:49.402 {
00:11:49.402 "name": "pt4",
00:11:49.402 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:49.402 "is_configured": true,
00:11:49.402 "data_offset": 2048,
00:11:49.402 "data_size": 63488
00:11:49.402 }
00:11:49.402 ]
00:11:49.402 }'
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.402 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:49.971 07:43:39
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.971 [2024-11-29 07:43:39.733732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.971 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:49.971 "name": "raid_bdev1",
00:11:49.971 "aliases": [
00:11:49.971 "b9fddee2-49f4-43e9-8258-a925c305fc26"
00:11:49.971 ],
00:11:49.971 "product_name": "Raid Volume",
00:11:49.971 "block_size": 512,
00:11:49.971 "num_blocks": 253952,
00:11:49.971 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26",
00:11:49.971 "assigned_rate_limits": {
00:11:49.971 "rw_ios_per_sec": 0,
00:11:49.971 "rw_mbytes_per_sec": 0,
00:11:49.971 "r_mbytes_per_sec": 0,
00:11:49.971 "w_mbytes_per_sec": 0
00:11:49.971 },
00:11:49.971 "claimed": false,
00:11:49.971 "zoned": false,
00:11:49.971 "supported_io_types": {
00:11:49.971 "read": true,
00:11:49.971 "write": true,
00:11:49.971 "unmap": true,
00:11:49.971 "flush": true,
00:11:49.971 "reset": true,
00:11:49.971 "nvme_admin": false,
00:11:49.971 "nvme_io": false,
00:11:49.971 "nvme_io_md": false,
00:11:49.971 "write_zeroes": true,
00:11:49.971 "zcopy": false,
00:11:49.971 "get_zone_info": false,
00:11:49.971 "zone_management": false,
00:11:49.971 "zone_append": false,
00:11:49.971 "compare": false,
00:11:49.971 "compare_and_write": false,
00:11:49.971 "abort": false,
00:11:49.971 "seek_hole": false,
00:11:49.971 "seek_data": false,
00:11:49.971 "copy": false,
00:11:49.971 "nvme_iov_md": false
00:11:49.971 },
00:11:49.971 "memory_domains": [
00:11:49.971 {
00:11:49.971 "dma_device_id": "system",
00:11:49.971 "dma_device_type": 1
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.971 "dma_device_type": 2
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "system",
00:11:49.971 "dma_device_type": 1
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.971 "dma_device_type": 2
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "system",
00:11:49.971 "dma_device_type": 1
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.971 "dma_device_type": 2
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "system",
00:11:49.971 "dma_device_type": 1
00:11:49.971 },
00:11:49.971 {
00:11:49.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.971 "dma_device_type": 2
00:11:49.971 }
00:11:49.971 ],
00:11:49.971 "driver_specific": {
00:11:49.971 "raid": {
00:11:49.971 "uuid": "b9fddee2-49f4-43e9-8258-a925c305fc26",
00:11:49.971 "strip_size_kb": 64,
00:11:49.972 "state": "online",
00:11:49.972 "raid_level": "concat",
00:11:49.972 "superblock": true,
00:11:49.972 "num_base_bdevs": 4,
00:11:49.972 "num_base_bdevs_discovered": 4,
00:11:49.972 "num_base_bdevs_operational": 4,
00:11:49.972 "base_bdevs_list": [
00:11:49.972 {
00:11:49.972 "name": "pt1",
00:11:49.972 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:49.972 "is_configured": true,
00:11:49.972 "data_offset": 2048,
00:11:49.972 "data_size": 63488
00:11:49.972 },
00:11:49.972 {
00:11:49.972 "name": "pt2",
00:11:49.972 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:49.972 "is_configured": true,
00:11:49.972 "data_offset": 2048,
00:11:49.972 "data_size": 63488
00:11:49.972 },
00:11:49.972 {
00:11:49.972 "name": "pt3",
00:11:49.972 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:49.972 "is_configured": true,
00:11:49.972 "data_offset": 2048,
00:11:49.972 "data_size": 63488
00:11:49.972 },
00:11:49.972 {
00:11:49.972 "name": "pt4",
00:11:49.972 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:49.972 "is_configured": true,
00:11:49.972 "data_offset": 2048,
00:11:49.972 "data_size": 63488
00:11:49.972 }
00:11:49.972 ]
00:11:49.972 }
00:11:49.972 }
00:11:49.972 }'
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:49.972 pt2
00:11:49.972 pt3
00:11:49.972 pt4'
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:49.972 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] |
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.232 07:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.232 [2024-11-29 07:43:40.073132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b9fddee2-49f4-43e9-8258-a925c305fc26 '!=' b9fddee2-49f4-43e9-8258-a925c305fc26 ']' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72394 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72394 ']' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72394 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72394 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72394' 00:11:50.232 killing process with pid 72394 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72394 00:11:50.232 [2024-11-29 07:43:40.129717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.232 [2024-11-29 07:43:40.129798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.232 [2024-11-29 07:43:40.129873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.232 [2024-11-29 07:43:40.129883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:50.232 07:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72394 00:11:50.802 [2024-11-29 07:43:40.532894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.741 07:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.741 00:11:51.741 real 0m5.485s 00:11:51.741 user 0m7.879s 00:11:51.741 sys 0m0.882s 00:11:51.741 ************************************ 00:11:51.741 END TEST raid_superblock_test 00:11:51.741 ************************************ 00:11:51.741 07:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.741 07:43:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.000 07:43:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:52.000 07:43:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.000 07:43:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.000 07:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.000 ************************************ 00:11:52.000 START TEST raid_read_error_test 00:11:52.000 ************************************ 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NBcgjxVTnk 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72656 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72656 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72656 ']' 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.000 07:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.000 [2024-11-29 07:43:41.832038] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:52.000 [2024-11-29 07:43:41.832193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72656 ] 00:11:52.260 [2024-11-29 07:43:42.006708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.260 [2024-11-29 07:43:42.117447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.520 [2024-11-29 07:43:42.316970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.520 [2024-11-29 07:43:42.317037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.780 BaseBdev1_malloc 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.780 true 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.780 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.780 [2024-11-29 07:43:42.723448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.780 [2024-11-29 07:43:42.723510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.780 [2024-11-29 07:43:42.723533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.780 [2024-11-29 07:43:42.723545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.040 [2024-11-29 07:43:42.725892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.040 [2024-11-29 07:43:42.725939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.040 BaseBdev1 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.040 BaseBdev2_malloc 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.040 07:43:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 true 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 [2024-11-29 07:43:42.790972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.041 [2024-11-29 07:43:42.791031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.041 [2024-11-29 07:43:42.791049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.041 [2024-11-29 07:43:42.791060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.041 [2024-11-29 07:43:42.793168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.041 [2024-11-29 07:43:42.793267] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.041 BaseBdev2 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 BaseBdev3_malloc 00:11:53.041 07:43:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 true 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 [2024-11-29 07:43:42.870884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.041 [2024-11-29 07:43:42.870941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.041 [2024-11-29 07:43:42.870959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.041 [2024-11-29 07:43:42.870969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.041 [2024-11-29 07:43:42.873232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.041 [2024-11-29 07:43:42.873272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.041 BaseBdev3 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 BaseBdev4_malloc 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 true 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 [2024-11-29 07:43:42.938395] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:53.041 [2024-11-29 07:43:42.938444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.041 [2024-11-29 07:43:42.938462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:53.041 [2024-11-29 07:43:42.938472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.041 [2024-11-29 07:43:42.940558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.041 [2024-11-29 07:43:42.940603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:53.041 BaseBdev4 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 [2024-11-29 07:43:42.950434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.041 [2024-11-29 07:43:42.952215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.041 [2024-11-29 07:43:42.952289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.041 [2024-11-29 07:43:42.952351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.041 [2024-11-29 07:43:42.952568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:53.041 [2024-11-29 07:43:42.952585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:53.041 [2024-11-29 07:43:42.952821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:53.041 [2024-11-29 07:43:42.952990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:53.041 [2024-11-29 07:43:42.953001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:53.041 [2024-11-29 07:43:42.953171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:53.041 07:43:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 07:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.302 07:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.302 "name": "raid_bdev1", 00:11:53.302 "uuid": "b8650d92-bee9-4db9-a6e6-b77e2b0d0048", 00:11:53.302 "strip_size_kb": 64, 00:11:53.302 "state": "online", 00:11:53.302 "raid_level": "concat", 00:11:53.302 "superblock": true, 00:11:53.302 "num_base_bdevs": 4, 00:11:53.302 "num_base_bdevs_discovered": 4, 00:11:53.302 "num_base_bdevs_operational": 4, 00:11:53.302 "base_bdevs_list": [ 
00:11:53.302 { 00:11:53.302 "name": "BaseBdev1", 00:11:53.302 "uuid": "87dafe8f-8f7f-55ab-885c-1d6a1a6cb213", 00:11:53.302 "is_configured": true, 00:11:53.302 "data_offset": 2048, 00:11:53.302 "data_size": 63488 00:11:53.302 }, 00:11:53.302 { 00:11:53.302 "name": "BaseBdev2", 00:11:53.302 "uuid": "e320d70e-006d-55f1-a48d-94b4d3f5deb1", 00:11:53.302 "is_configured": true, 00:11:53.302 "data_offset": 2048, 00:11:53.302 "data_size": 63488 00:11:53.302 }, 00:11:53.302 { 00:11:53.302 "name": "BaseBdev3", 00:11:53.302 "uuid": "b03eed2d-e745-52fd-9e14-8584fe8d18b2", 00:11:53.302 "is_configured": true, 00:11:53.302 "data_offset": 2048, 00:11:53.302 "data_size": 63488 00:11:53.302 }, 00:11:53.302 { 00:11:53.302 "name": "BaseBdev4", 00:11:53.302 "uuid": "54e1bf9c-3e5b-55a8-8a2e-94dee6e10e5e", 00:11:53.302 "is_configured": true, 00:11:53.302 "data_offset": 2048, 00:11:53.302 "data_size": 63488 00:11:53.302 } 00:11:53.302 ] 00:11:53.302 }' 00:11:53.302 07:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.302 07:43:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.569 07:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.569 07:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.569 [2024-11-29 07:43:43.470790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.520 07:43:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.520 07:43:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.520 "name": "raid_bdev1", 00:11:54.520 "uuid": "b8650d92-bee9-4db9-a6e6-b77e2b0d0048", 00:11:54.520 "strip_size_kb": 64, 00:11:54.520 "state": "online", 00:11:54.520 "raid_level": "concat", 00:11:54.520 "superblock": true, 00:11:54.520 "num_base_bdevs": 4, 00:11:54.520 "num_base_bdevs_discovered": 4, 00:11:54.520 "num_base_bdevs_operational": 4, 00:11:54.520 "base_bdevs_list": [ 00:11:54.520 { 00:11:54.520 "name": "BaseBdev1", 00:11:54.520 "uuid": "87dafe8f-8f7f-55ab-885c-1d6a1a6cb213", 00:11:54.520 "is_configured": true, 00:11:54.520 "data_offset": 2048, 00:11:54.520 "data_size": 63488 00:11:54.520 }, 00:11:54.520 { 00:11:54.520 "name": "BaseBdev2", 00:11:54.520 "uuid": "e320d70e-006d-55f1-a48d-94b4d3f5deb1", 00:11:54.520 "is_configured": true, 00:11:54.520 "data_offset": 2048, 00:11:54.520 "data_size": 63488 00:11:54.520 }, 00:11:54.520 { 00:11:54.520 "name": "BaseBdev3", 00:11:54.520 "uuid": "b03eed2d-e745-52fd-9e14-8584fe8d18b2", 00:11:54.520 "is_configured": true, 00:11:54.520 "data_offset": 2048, 00:11:54.520 "data_size": 63488 00:11:54.520 }, 00:11:54.520 { 00:11:54.520 "name": "BaseBdev4", 00:11:54.520 "uuid": "54e1bf9c-3e5b-55a8-8a2e-94dee6e10e5e", 00:11:54.520 "is_configured": true, 00:11:54.520 "data_offset": 2048, 00:11:54.520 "data_size": 63488 00:11:54.520 } 00:11:54.520 ] 00:11:54.520 }' 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.520 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.091 [2024-11-29 07:43:44.778565] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.091 [2024-11-29 07:43:44.778678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.091 [2024-11-29 07:43:44.781567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.091 [2024-11-29 07:43:44.781671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.091 [2024-11-29 07:43:44.781736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.091 [2024-11-29 07:43:44.781782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.091 { 00:11:55.091 "results": [ 00:11:55.091 { 00:11:55.091 "job": "raid_bdev1", 00:11:55.091 "core_mask": "0x1", 00:11:55.091 "workload": "randrw", 00:11:55.091 "percentage": 50, 00:11:55.091 "status": "finished", 00:11:55.091 "queue_depth": 1, 00:11:55.091 "io_size": 131072, 00:11:55.091 "runtime": 1.308542, 00:11:55.091 "iops": 15171.847751161216, 00:11:55.091 "mibps": 1896.480968895152, 00:11:55.091 "io_failed": 1, 00:11:55.091 "io_timeout": 0, 00:11:55.091 "avg_latency_us": 91.32503713791904, 00:11:55.091 "min_latency_us": 26.941484716157206, 00:11:55.091 "max_latency_us": 1452.380786026201 00:11:55.091 } 00:11:55.091 ], 00:11:55.091 "core_count": 1 00:11:55.091 } 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72656 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72656 ']' 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72656 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72656 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72656' 00:11:55.091 killing process with pid 72656 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72656 00:11:55.091 [2024-11-29 07:43:44.822664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.091 07:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72656 00:11:55.357 [2024-11-29 07:43:45.149582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NBcgjxVTnk 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:11:56.745 ************************************ 00:11:56.745 END TEST raid_read_error_test 00:11:56.745 ************************************ 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:11:56.745 00:11:56.745 real 0m4.618s 
00:11:56.745 user 0m5.397s 00:11:56.745 sys 0m0.555s 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.745 07:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.745 07:43:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:56.745 07:43:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.745 07:43:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.745 07:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.745 ************************************ 00:11:56.745 START TEST raid_write_error_test 00:11:56.745 ************************************ 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i89HbNavIz 00:11:56.745 07:43:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72804 00:11:56.745 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72804 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72804 ']' 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.746 07:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.746 [2024-11-29 07:43:46.525741] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:56.746 [2024-11-29 07:43:46.525948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72804 ] 00:11:57.006 [2024-11-29 07:43:46.700599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.006 [2024-11-29 07:43:46.814119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.264 [2024-11-29 07:43:47.017899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.264 [2024-11-29 07:43:47.017935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.524 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.524 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.524 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 BaseBdev1_malloc 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 true 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 [2024-11-29 07:43:47.416454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.525 [2024-11-29 07:43:47.416513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.525 [2024-11-29 07:43:47.416548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.525 [2024-11-29 07:43:47.416560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.525 [2024-11-29 07:43:47.418746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.525 [2024-11-29 07:43:47.418821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.525 BaseBdev1 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 BaseBdev2_malloc 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.525 07:43:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.525 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 true 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 [2024-11-29 07:43:47.482135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.786 [2024-11-29 07:43:47.482186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.786 [2024-11-29 07:43:47.482217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.786 [2024-11-29 07:43:47.482227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.786 [2024-11-29 07:43:47.484252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.786 [2024-11-29 07:43:47.484302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.786 BaseBdev2 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:57.786 BaseBdev3_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 true 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 [2024-11-29 07:43:47.561756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.786 [2024-11-29 07:43:47.561807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.786 [2024-11-29 07:43:47.561823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.786 [2024-11-29 07:43:47.561833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.786 [2024-11-29 07:43:47.563832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.786 [2024-11-29 07:43:47.563923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.786 BaseBdev3 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 BaseBdev4_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 true 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 [2024-11-29 07:43:47.626197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:57.786 [2024-11-29 07:43:47.626246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.786 [2024-11-29 07:43:47.626263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.786 [2024-11-29 07:43:47.626272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.786 [2024-11-29 07:43:47.628255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.786 [2024-11-29 07:43:47.628293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:57.786 BaseBdev4 
00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.786 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.786 [2024-11-29 07:43:47.638225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.786 [2024-11-29 07:43:47.639959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.787 [2024-11-29 07:43:47.640037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.787 [2024-11-29 07:43:47.640096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.787 [2024-11-29 07:43:47.640329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:57.787 [2024-11-29 07:43:47.640347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:57.787 [2024-11-29 07:43:47.640593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:57.787 [2024-11-29 07:43:47.640738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:57.787 [2024-11-29 07:43:47.640748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:57.787 [2024-11-29 07:43:47.640880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.787 "name": "raid_bdev1", 00:11:57.787 "uuid": "16371ffc-1383-490d-8ccf-2df6973a34f7", 00:11:57.787 "strip_size_kb": 64, 00:11:57.787 "state": "online", 00:11:57.787 "raid_level": "concat", 00:11:57.787 "superblock": true, 00:11:57.787 "num_base_bdevs": 4, 00:11:57.787 "num_base_bdevs_discovered": 4, 00:11:57.787 
"num_base_bdevs_operational": 4, 00:11:57.787 "base_bdevs_list": [ 00:11:57.787 { 00:11:57.787 "name": "BaseBdev1", 00:11:57.787 "uuid": "7c18e96c-7b31-5aed-b311-7da140a9c902", 00:11:57.787 "is_configured": true, 00:11:57.787 "data_offset": 2048, 00:11:57.787 "data_size": 63488 00:11:57.787 }, 00:11:57.787 { 00:11:57.787 "name": "BaseBdev2", 00:11:57.787 "uuid": "27d8dea0-59f6-5254-8062-9a8462bd4b26", 00:11:57.787 "is_configured": true, 00:11:57.787 "data_offset": 2048, 00:11:57.787 "data_size": 63488 00:11:57.787 }, 00:11:57.787 { 00:11:57.787 "name": "BaseBdev3", 00:11:57.787 "uuid": "f7f5398e-7567-59b3-ac5a-a135580f3f3f", 00:11:57.787 "is_configured": true, 00:11:57.787 "data_offset": 2048, 00:11:57.787 "data_size": 63488 00:11:57.787 }, 00:11:57.787 { 00:11:57.787 "name": "BaseBdev4", 00:11:57.787 "uuid": "4e3a5159-0e1e-5804-9866-5ac57f8e03ac", 00:11:57.787 "is_configured": true, 00:11:57.787 "data_offset": 2048, 00:11:57.787 "data_size": 63488 00:11:57.787 } 00:11:57.787 ] 00:11:57.787 }' 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.787 07:43:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.357 07:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.357 07:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.357 [2024-11-29 07:43:48.130739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.296 07:43:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.296 "name": "raid_bdev1", 00:11:59.296 "uuid": "16371ffc-1383-490d-8ccf-2df6973a34f7", 00:11:59.296 "strip_size_kb": 64, 00:11:59.296 "state": "online", 00:11:59.296 "raid_level": "concat", 00:11:59.296 "superblock": true, 00:11:59.296 "num_base_bdevs": 4, 00:11:59.296 "num_base_bdevs_discovered": 4, 00:11:59.296 "num_base_bdevs_operational": 4, 00:11:59.296 "base_bdevs_list": [ 00:11:59.296 { 00:11:59.296 "name": "BaseBdev1", 00:11:59.296 "uuid": "7c18e96c-7b31-5aed-b311-7da140a9c902", 00:11:59.296 "is_configured": true, 00:11:59.296 "data_offset": 2048, 00:11:59.296 "data_size": 63488 00:11:59.296 }, 00:11:59.296 { 00:11:59.296 "name": "BaseBdev2", 00:11:59.296 "uuid": "27d8dea0-59f6-5254-8062-9a8462bd4b26", 00:11:59.296 "is_configured": true, 00:11:59.296 "data_offset": 2048, 00:11:59.296 "data_size": 63488 00:11:59.296 }, 00:11:59.296 { 00:11:59.296 "name": "BaseBdev3", 00:11:59.296 "uuid": "f7f5398e-7567-59b3-ac5a-a135580f3f3f", 00:11:59.296 "is_configured": true, 00:11:59.296 "data_offset": 2048, 00:11:59.296 "data_size": 63488 00:11:59.296 }, 00:11:59.296 { 00:11:59.296 "name": "BaseBdev4", 00:11:59.296 "uuid": "4e3a5159-0e1e-5804-9866-5ac57f8e03ac", 00:11:59.296 "is_configured": true, 00:11:59.296 "data_offset": 2048, 00:11:59.296 "data_size": 63488 00:11:59.296 } 00:11:59.296 ] 00:11:59.296 }' 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.296 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.866 [2024-11-29 07:43:49.529341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.866 [2024-11-29 07:43:49.529448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.866 [2024-11-29 07:43:49.532265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.866 [2024-11-29 07:43:49.532382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.866 [2024-11-29 07:43:49.532448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.866 [2024-11-29 07:43:49.532496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:59.866 { 00:11:59.866 "results": [ 00:11:59.866 { 00:11:59.866 "job": "raid_bdev1", 00:11:59.866 "core_mask": "0x1", 00:11:59.866 "workload": "randrw", 00:11:59.866 "percentage": 50, 00:11:59.866 "status": "finished", 00:11:59.866 "queue_depth": 1, 00:11:59.866 "io_size": 131072, 00:11:59.866 "runtime": 1.399644, 00:11:59.866 "iops": 15576.103637782178, 00:11:59.866 "mibps": 1947.0129547227723, 00:11:59.866 "io_failed": 1, 00:11:59.866 "io_timeout": 0, 00:11:59.866 "avg_latency_us": 88.97934334777187, 00:11:59.866 "min_latency_us": 25.041048034934498, 00:11:59.866 "max_latency_us": 1459.5353711790392 00:11:59.866 } 00:11:59.866 ], 00:11:59.866 "core_count": 1 00:11:59.866 } 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72804 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72804 ']' 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72804 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72804 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72804' 00:11:59.866 killing process with pid 72804 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72804 00:11:59.866 [2024-11-29 07:43:49.565936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.866 07:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72804 00:12:00.126 [2024-11-29 07:43:49.882721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i89HbNavIz 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:01.243 00:12:01.243 real 0m4.635s 00:12:01.243 user 0m5.444s 
00:12:01.243 sys 0m0.584s 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.243 ************************************ 00:12:01.243 END TEST raid_write_error_test 00:12:01.243 ************************************ 00:12:01.243 07:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 07:43:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:01.243 07:43:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:01.243 07:43:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.243 07:43:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.243 07:43:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 ************************************ 00:12:01.243 START TEST raid_state_function_test 00:12:01.243 ************************************ 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.243 
07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:01.243 07:43:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:01.243 Process raid pid: 72950 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72950 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72950' 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72950 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72950 ']' 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.243 07:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.503 [2024-11-29 07:43:51.219874] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:01.503 [2024-11-29 07:43:51.220075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.503 [2024-11-29 07:43:51.392061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.762 [2024-11-29 07:43:51.502940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.762 [2024-11-29 07:43:51.699953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.762 [2024-11-29 07:43:51.699993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.330 [2024-11-29 07:43:52.048896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.330 [2024-11-29 07:43:52.048960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.330 [2024-11-29 07:43:52.048971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.330 [2024-11-29 07:43:52.048981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.330 [2024-11-29 07:43:52.048988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:02.330 [2024-11-29 07:43:52.048997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.330 [2024-11-29 07:43:52.049008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.330 [2024-11-29 07:43:52.049017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.330 "name": "Existed_Raid", 00:12:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.330 "strip_size_kb": 0, 00:12:02.330 "state": "configuring", 00:12:02.330 "raid_level": "raid1", 00:12:02.330 "superblock": false, 00:12:02.330 "num_base_bdevs": 4, 00:12:02.330 "num_base_bdevs_discovered": 0, 00:12:02.330 "num_base_bdevs_operational": 4, 00:12:02.330 "base_bdevs_list": [ 00:12:02.330 { 00:12:02.330 "name": "BaseBdev1", 00:12:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.330 "is_configured": false, 00:12:02.330 "data_offset": 0, 00:12:02.330 "data_size": 0 00:12:02.330 }, 00:12:02.330 { 00:12:02.330 "name": "BaseBdev2", 00:12:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.330 "is_configured": false, 00:12:02.330 "data_offset": 0, 00:12:02.330 "data_size": 0 00:12:02.330 }, 00:12:02.330 { 00:12:02.330 "name": "BaseBdev3", 00:12:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.330 "is_configured": false, 00:12:02.330 "data_offset": 0, 00:12:02.330 "data_size": 0 00:12:02.330 }, 00:12:02.330 { 00:12:02.330 "name": "BaseBdev4", 00:12:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.330 "is_configured": false, 00:12:02.330 "data_offset": 0, 00:12:02.330 "data_size": 0 00:12:02.330 } 00:12:02.330 ] 00:12:02.330 }' 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.330 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.589 [2024-11-29 07:43:52.460175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.589 [2024-11-29 07:43:52.460276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.589 [2024-11-29 07:43:52.472146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.589 [2024-11-29 07:43:52.472224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.589 [2024-11-29 07:43:52.472252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.589 [2024-11-29 07:43:52.472276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.589 [2024-11-29 07:43:52.472295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.589 [2024-11-29 07:43:52.472317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.589 [2024-11-29 07:43:52.472335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.589 [2024-11-29 07:43:52.472356] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.589 [2024-11-29 07:43:52.518915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.589 BaseBdev1 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.589 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.848 [ 00:12:02.848 { 00:12:02.848 "name": "BaseBdev1", 00:12:02.848 "aliases": [ 00:12:02.848 "5449b54b-1581-4e2e-bed3-07730ad3dc3a" 00:12:02.848 ], 00:12:02.848 "product_name": "Malloc disk", 00:12:02.848 "block_size": 512, 00:12:02.848 "num_blocks": 65536, 00:12:02.848 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:02.848 "assigned_rate_limits": { 00:12:02.848 "rw_ios_per_sec": 0, 00:12:02.848 "rw_mbytes_per_sec": 0, 00:12:02.848 "r_mbytes_per_sec": 0, 00:12:02.848 "w_mbytes_per_sec": 0 00:12:02.848 }, 00:12:02.848 "claimed": true, 00:12:02.848 "claim_type": "exclusive_write", 00:12:02.848 "zoned": false, 00:12:02.848 "supported_io_types": { 00:12:02.848 "read": true, 00:12:02.848 "write": true, 00:12:02.848 "unmap": true, 00:12:02.848 "flush": true, 00:12:02.848 "reset": true, 00:12:02.848 "nvme_admin": false, 00:12:02.848 "nvme_io": false, 00:12:02.848 "nvme_io_md": false, 00:12:02.848 "write_zeroes": true, 00:12:02.848 "zcopy": true, 00:12:02.848 "get_zone_info": false, 00:12:02.848 "zone_management": false, 00:12:02.848 "zone_append": false, 00:12:02.848 "compare": false, 00:12:02.848 "compare_and_write": false, 00:12:02.848 "abort": true, 00:12:02.848 "seek_hole": false, 00:12:02.848 "seek_data": false, 00:12:02.848 "copy": true, 00:12:02.848 "nvme_iov_md": false 00:12:02.848 }, 00:12:02.848 "memory_domains": [ 00:12:02.848 { 00:12:02.848 "dma_device_id": "system", 00:12:02.848 "dma_device_type": 1 00:12:02.848 }, 00:12:02.848 { 00:12:02.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.848 "dma_device_type": 2 00:12:02.848 } 00:12:02.848 ], 00:12:02.848 "driver_specific": {} 00:12:02.848 } 00:12:02.848 ] 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.848 "name": "Existed_Raid", 
00:12:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.848 "strip_size_kb": 0, 00:12:02.848 "state": "configuring", 00:12:02.848 "raid_level": "raid1", 00:12:02.848 "superblock": false, 00:12:02.848 "num_base_bdevs": 4, 00:12:02.848 "num_base_bdevs_discovered": 1, 00:12:02.848 "num_base_bdevs_operational": 4, 00:12:02.848 "base_bdevs_list": [ 00:12:02.848 { 00:12:02.848 "name": "BaseBdev1", 00:12:02.848 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:02.848 "is_configured": true, 00:12:02.848 "data_offset": 0, 00:12:02.848 "data_size": 65536 00:12:02.848 }, 00:12:02.848 { 00:12:02.848 "name": "BaseBdev2", 00:12:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.848 "is_configured": false, 00:12:02.848 "data_offset": 0, 00:12:02.848 "data_size": 0 00:12:02.848 }, 00:12:02.848 { 00:12:02.848 "name": "BaseBdev3", 00:12:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.848 "is_configured": false, 00:12:02.848 "data_offset": 0, 00:12:02.848 "data_size": 0 00:12:02.848 }, 00:12:02.848 { 00:12:02.848 "name": "BaseBdev4", 00:12:02.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.848 "is_configured": false, 00:12:02.848 "data_offset": 0, 00:12:02.848 "data_size": 0 00:12:02.848 } 00:12:02.848 ] 00:12:02.848 }' 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.848 07:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.107 [2024-11-29 07:43:53.006125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.107 [2024-11-29 07:43:53.006180] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.107 [2024-11-29 07:43:53.018154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.107 [2024-11-29 07:43:53.019996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.107 [2024-11-29 07:43:53.020078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.107 [2024-11-29 07:43:53.020122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.107 [2024-11-29 07:43:53.020148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.107 [2024-11-29 07:43:53.020184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.107 [2024-11-29 07:43:53.020205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.107 
07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.107 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.366 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.366 "name": "Existed_Raid", 00:12:03.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.366 "strip_size_kb": 0, 00:12:03.366 "state": "configuring", 00:12:03.366 "raid_level": "raid1", 00:12:03.366 "superblock": false, 00:12:03.366 "num_base_bdevs": 4, 00:12:03.366 "num_base_bdevs_discovered": 1, 
00:12:03.366 "num_base_bdevs_operational": 4, 00:12:03.366 "base_bdevs_list": [ 00:12:03.366 { 00:12:03.366 "name": "BaseBdev1", 00:12:03.366 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:03.366 "is_configured": true, 00:12:03.366 "data_offset": 0, 00:12:03.366 "data_size": 65536 00:12:03.366 }, 00:12:03.366 { 00:12:03.366 "name": "BaseBdev2", 00:12:03.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.366 "is_configured": false, 00:12:03.366 "data_offset": 0, 00:12:03.366 "data_size": 0 00:12:03.366 }, 00:12:03.366 { 00:12:03.366 "name": "BaseBdev3", 00:12:03.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.366 "is_configured": false, 00:12:03.366 "data_offset": 0, 00:12:03.366 "data_size": 0 00:12:03.366 }, 00:12:03.366 { 00:12:03.366 "name": "BaseBdev4", 00:12:03.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.366 "is_configured": false, 00:12:03.366 "data_offset": 0, 00:12:03.366 "data_size": 0 00:12:03.366 } 00:12:03.366 ] 00:12:03.366 }' 00:12:03.366 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.366 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.626 [2024-11-29 07:43:53.513482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.626 BaseBdev2 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.626 [ 00:12:03.626 { 00:12:03.626 "name": "BaseBdev2", 00:12:03.626 "aliases": [ 00:12:03.626 "2f830f5f-8f3d-4d52-b4e1-3a3154124be8" 00:12:03.626 ], 00:12:03.626 "product_name": "Malloc disk", 00:12:03.626 "block_size": 512, 00:12:03.626 "num_blocks": 65536, 00:12:03.626 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:03.626 "assigned_rate_limits": { 00:12:03.626 "rw_ios_per_sec": 0, 00:12:03.626 "rw_mbytes_per_sec": 0, 00:12:03.626 "r_mbytes_per_sec": 0, 00:12:03.626 "w_mbytes_per_sec": 0 00:12:03.626 }, 00:12:03.626 "claimed": true, 00:12:03.626 "claim_type": "exclusive_write", 00:12:03.626 "zoned": false, 00:12:03.626 "supported_io_types": { 00:12:03.626 "read": true, 
00:12:03.626 "write": true, 00:12:03.626 "unmap": true, 00:12:03.626 "flush": true, 00:12:03.626 "reset": true, 00:12:03.626 "nvme_admin": false, 00:12:03.626 "nvme_io": false, 00:12:03.626 "nvme_io_md": false, 00:12:03.626 "write_zeroes": true, 00:12:03.626 "zcopy": true, 00:12:03.626 "get_zone_info": false, 00:12:03.626 "zone_management": false, 00:12:03.626 "zone_append": false, 00:12:03.626 "compare": false, 00:12:03.626 "compare_and_write": false, 00:12:03.626 "abort": true, 00:12:03.626 "seek_hole": false, 00:12:03.626 "seek_data": false, 00:12:03.626 "copy": true, 00:12:03.626 "nvme_iov_md": false 00:12:03.626 }, 00:12:03.626 "memory_domains": [ 00:12:03.626 { 00:12:03.626 "dma_device_id": "system", 00:12:03.626 "dma_device_type": 1 00:12:03.626 }, 00:12:03.626 { 00:12:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.626 "dma_device_type": 2 00:12:03.626 } 00:12:03.626 ], 00:12:03.626 "driver_specific": {} 00:12:03.626 } 00:12:03.626 ] 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.626 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.886 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.886 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.886 "name": "Existed_Raid", 00:12:03.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.886 "strip_size_kb": 0, 00:12:03.886 "state": "configuring", 00:12:03.886 "raid_level": "raid1", 00:12:03.886 "superblock": false, 00:12:03.886 "num_base_bdevs": 4, 00:12:03.886 "num_base_bdevs_discovered": 2, 00:12:03.886 "num_base_bdevs_operational": 4, 00:12:03.886 "base_bdevs_list": [ 00:12:03.886 { 00:12:03.886 "name": "BaseBdev1", 00:12:03.886 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:03.886 "is_configured": true, 00:12:03.886 "data_offset": 0, 00:12:03.886 "data_size": 65536 00:12:03.886 }, 00:12:03.886 { 00:12:03.886 "name": "BaseBdev2", 00:12:03.886 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:03.886 "is_configured": true, 
00:12:03.886 "data_offset": 0, 00:12:03.886 "data_size": 65536 00:12:03.886 }, 00:12:03.886 { 00:12:03.886 "name": "BaseBdev3", 00:12:03.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.886 "is_configured": false, 00:12:03.886 "data_offset": 0, 00:12:03.886 "data_size": 0 00:12:03.886 }, 00:12:03.886 { 00:12:03.886 "name": "BaseBdev4", 00:12:03.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.886 "is_configured": false, 00:12:03.886 "data_offset": 0, 00:12:03.886 "data_size": 0 00:12:03.886 } 00:12:03.886 ] 00:12:03.886 }' 00:12:03.886 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.886 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.146 07:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.146 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.146 07:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.146 [2024-11-29 07:43:54.025312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.146 BaseBdev3 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.146 [ 00:12:04.146 { 00:12:04.146 "name": "BaseBdev3", 00:12:04.146 "aliases": [ 00:12:04.146 "4f959e67-d718-4f20-89e4-2053f968777c" 00:12:04.146 ], 00:12:04.146 "product_name": "Malloc disk", 00:12:04.146 "block_size": 512, 00:12:04.146 "num_blocks": 65536, 00:12:04.146 "uuid": "4f959e67-d718-4f20-89e4-2053f968777c", 00:12:04.146 "assigned_rate_limits": { 00:12:04.146 "rw_ios_per_sec": 0, 00:12:04.146 "rw_mbytes_per_sec": 0, 00:12:04.146 "r_mbytes_per_sec": 0, 00:12:04.146 "w_mbytes_per_sec": 0 00:12:04.146 }, 00:12:04.146 "claimed": true, 00:12:04.146 "claim_type": "exclusive_write", 00:12:04.146 "zoned": false, 00:12:04.146 "supported_io_types": { 00:12:04.146 "read": true, 00:12:04.146 "write": true, 00:12:04.146 "unmap": true, 00:12:04.146 "flush": true, 00:12:04.146 "reset": true, 00:12:04.146 "nvme_admin": false, 00:12:04.146 "nvme_io": false, 00:12:04.146 "nvme_io_md": false, 00:12:04.146 "write_zeroes": true, 00:12:04.146 "zcopy": true, 00:12:04.146 "get_zone_info": false, 00:12:04.146 "zone_management": false, 00:12:04.146 "zone_append": false, 00:12:04.146 "compare": false, 00:12:04.146 "compare_and_write": false, 
00:12:04.146 "abort": true, 00:12:04.146 "seek_hole": false, 00:12:04.146 "seek_data": false, 00:12:04.146 "copy": true, 00:12:04.146 "nvme_iov_md": false 00:12:04.146 }, 00:12:04.146 "memory_domains": [ 00:12:04.146 { 00:12:04.146 "dma_device_id": "system", 00:12:04.146 "dma_device_type": 1 00:12:04.146 }, 00:12:04.146 { 00:12:04.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.146 "dma_device_type": 2 00:12:04.146 } 00:12:04.146 ], 00:12:04.146 "driver_specific": {} 00:12:04.146 } 00:12:04.146 ] 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.146 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.406 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.406 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.406 "name": "Existed_Raid", 00:12:04.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.406 "strip_size_kb": 0, 00:12:04.406 "state": "configuring", 00:12:04.406 "raid_level": "raid1", 00:12:04.406 "superblock": false, 00:12:04.406 "num_base_bdevs": 4, 00:12:04.407 "num_base_bdevs_discovered": 3, 00:12:04.407 "num_base_bdevs_operational": 4, 00:12:04.407 "base_bdevs_list": [ 00:12:04.407 { 00:12:04.407 "name": "BaseBdev1", 00:12:04.407 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:04.407 "is_configured": true, 00:12:04.407 "data_offset": 0, 00:12:04.407 "data_size": 65536 00:12:04.407 }, 00:12:04.407 { 00:12:04.407 "name": "BaseBdev2", 00:12:04.407 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:04.407 "is_configured": true, 00:12:04.407 "data_offset": 0, 00:12:04.407 "data_size": 65536 00:12:04.407 }, 00:12:04.407 { 00:12:04.407 "name": "BaseBdev3", 00:12:04.407 "uuid": "4f959e67-d718-4f20-89e4-2053f968777c", 00:12:04.407 "is_configured": true, 00:12:04.407 "data_offset": 0, 00:12:04.407 "data_size": 65536 00:12:04.407 }, 00:12:04.407 { 00:12:04.407 "name": "BaseBdev4", 00:12:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.407 "is_configured": false, 
00:12:04.407 "data_offset": 0, 00:12:04.407 "data_size": 0 00:12:04.407 } 00:12:04.407 ] 00:12:04.407 }' 00:12:04.407 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.407 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 [2024-11-29 07:43:54.492772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.667 [2024-11-29 07:43:54.492826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.667 [2024-11-29 07:43:54.492835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.667 [2024-11-29 07:43:54.493095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.667 [2024-11-29 07:43:54.493347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.667 [2024-11-29 07:43:54.493362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:04.667 [2024-11-29 07:43:54.493621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.667 BaseBdev4 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.667 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 [ 00:12:04.667 { 00:12:04.667 "name": "BaseBdev4", 00:12:04.667 "aliases": [ 00:12:04.667 "35c90a5f-ac4a-4c1a-991a-dee9c7c93ac0" 00:12:04.667 ], 00:12:04.667 "product_name": "Malloc disk", 00:12:04.667 "block_size": 512, 00:12:04.667 "num_blocks": 65536, 00:12:04.667 "uuid": "35c90a5f-ac4a-4c1a-991a-dee9c7c93ac0", 00:12:04.667 "assigned_rate_limits": { 00:12:04.667 "rw_ios_per_sec": 0, 00:12:04.667 "rw_mbytes_per_sec": 0, 00:12:04.667 "r_mbytes_per_sec": 0, 00:12:04.667 "w_mbytes_per_sec": 0 00:12:04.667 }, 00:12:04.667 "claimed": true, 00:12:04.667 "claim_type": "exclusive_write", 00:12:04.667 "zoned": false, 00:12:04.667 "supported_io_types": { 00:12:04.667 "read": true, 00:12:04.667 "write": true, 00:12:04.667 "unmap": true, 00:12:04.667 "flush": true, 00:12:04.667 "reset": true, 00:12:04.667 
"nvme_admin": false, 00:12:04.667 "nvme_io": false, 00:12:04.667 "nvme_io_md": false, 00:12:04.667 "write_zeroes": true, 00:12:04.667 "zcopy": true, 00:12:04.667 "get_zone_info": false, 00:12:04.667 "zone_management": false, 00:12:04.667 "zone_append": false, 00:12:04.667 "compare": false, 00:12:04.667 "compare_and_write": false, 00:12:04.667 "abort": true, 00:12:04.667 "seek_hole": false, 00:12:04.667 "seek_data": false, 00:12:04.667 "copy": true, 00:12:04.668 "nvme_iov_md": false 00:12:04.668 }, 00:12:04.668 "memory_domains": [ 00:12:04.668 { 00:12:04.668 "dma_device_id": "system", 00:12:04.668 "dma_device_type": 1 00:12:04.668 }, 00:12:04.668 { 00:12:04.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.668 "dma_device_type": 2 00:12:04.668 } 00:12:04.668 ], 00:12:04.668 "driver_specific": {} 00:12:04.668 } 00:12:04.668 ] 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.668 07:43:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.668 "name": "Existed_Raid", 00:12:04.668 "uuid": "253b56ae-6d7c-42a4-a94c-025ed322e640", 00:12:04.668 "strip_size_kb": 0, 00:12:04.668 "state": "online", 00:12:04.668 "raid_level": "raid1", 00:12:04.668 "superblock": false, 00:12:04.668 "num_base_bdevs": 4, 00:12:04.668 "num_base_bdevs_discovered": 4, 00:12:04.668 "num_base_bdevs_operational": 4, 00:12:04.668 "base_bdevs_list": [ 00:12:04.668 { 00:12:04.668 "name": "BaseBdev1", 00:12:04.668 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:04.668 "is_configured": true, 00:12:04.668 "data_offset": 0, 00:12:04.668 "data_size": 65536 00:12:04.668 }, 00:12:04.668 { 00:12:04.668 "name": "BaseBdev2", 00:12:04.668 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:04.668 "is_configured": true, 00:12:04.668 "data_offset": 0, 00:12:04.668 "data_size": 65536 00:12:04.668 }, 00:12:04.668 { 00:12:04.668 "name": "BaseBdev3", 00:12:04.668 "uuid": 
"4f959e67-d718-4f20-89e4-2053f968777c", 00:12:04.668 "is_configured": true, 00:12:04.668 "data_offset": 0, 00:12:04.668 "data_size": 65536 00:12:04.668 }, 00:12:04.668 { 00:12:04.668 "name": "BaseBdev4", 00:12:04.668 "uuid": "35c90a5f-ac4a-4c1a-991a-dee9c7c93ac0", 00:12:04.668 "is_configured": true, 00:12:04.668 "data_offset": 0, 00:12:04.668 "data_size": 65536 00:12:04.668 } 00:12:04.668 ] 00:12:04.668 }' 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.668 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.239 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.240 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.240 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.240 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.240 [2024-11-29 07:43:54.972385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.240 07:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.240 07:43:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.240 "name": "Existed_Raid", 00:12:05.240 "aliases": [ 00:12:05.240 "253b56ae-6d7c-42a4-a94c-025ed322e640" 00:12:05.240 ], 00:12:05.240 "product_name": "Raid Volume", 00:12:05.240 "block_size": 512, 00:12:05.240 "num_blocks": 65536, 00:12:05.240 "uuid": "253b56ae-6d7c-42a4-a94c-025ed322e640", 00:12:05.240 "assigned_rate_limits": { 00:12:05.240 "rw_ios_per_sec": 0, 00:12:05.240 "rw_mbytes_per_sec": 0, 00:12:05.240 "r_mbytes_per_sec": 0, 00:12:05.240 "w_mbytes_per_sec": 0 00:12:05.240 }, 00:12:05.240 "claimed": false, 00:12:05.240 "zoned": false, 00:12:05.240 "supported_io_types": { 00:12:05.240 "read": true, 00:12:05.240 "write": true, 00:12:05.240 "unmap": false, 00:12:05.240 "flush": false, 00:12:05.240 "reset": true, 00:12:05.240 "nvme_admin": false, 00:12:05.240 "nvme_io": false, 00:12:05.240 "nvme_io_md": false, 00:12:05.240 "write_zeroes": true, 00:12:05.240 "zcopy": false, 00:12:05.240 "get_zone_info": false, 00:12:05.240 "zone_management": false, 00:12:05.240 "zone_append": false, 00:12:05.240 "compare": false, 00:12:05.240 "compare_and_write": false, 00:12:05.240 "abort": false, 00:12:05.241 "seek_hole": false, 00:12:05.241 "seek_data": false, 00:12:05.241 "copy": false, 00:12:05.241 "nvme_iov_md": false 00:12:05.241 }, 00:12:05.241 "memory_domains": [ 00:12:05.241 { 00:12:05.241 "dma_device_id": "system", 00:12:05.241 "dma_device_type": 1 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.241 "dma_device_type": 2 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "system", 00:12:05.241 "dma_device_type": 1 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.241 "dma_device_type": 2 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "system", 00:12:05.241 "dma_device_type": 1 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:05.241 "dma_device_type": 2 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "system", 00:12:05.241 "dma_device_type": 1 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.241 "dma_device_type": 2 00:12:05.241 } 00:12:05.241 ], 00:12:05.241 "driver_specific": { 00:12:05.241 "raid": { 00:12:05.241 "uuid": "253b56ae-6d7c-42a4-a94c-025ed322e640", 00:12:05.241 "strip_size_kb": 0, 00:12:05.241 "state": "online", 00:12:05.241 "raid_level": "raid1", 00:12:05.241 "superblock": false, 00:12:05.241 "num_base_bdevs": 4, 00:12:05.241 "num_base_bdevs_discovered": 4, 00:12:05.241 "num_base_bdevs_operational": 4, 00:12:05.241 "base_bdevs_list": [ 00:12:05.241 { 00:12:05.241 "name": "BaseBdev1", 00:12:05.241 "uuid": "5449b54b-1581-4e2e-bed3-07730ad3dc3a", 00:12:05.241 "is_configured": true, 00:12:05.241 "data_offset": 0, 00:12:05.241 "data_size": 65536 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "name": "BaseBdev2", 00:12:05.241 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:05.241 "is_configured": true, 00:12:05.241 "data_offset": 0, 00:12:05.241 "data_size": 65536 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "name": "BaseBdev3", 00:12:05.241 "uuid": "4f959e67-d718-4f20-89e4-2053f968777c", 00:12:05.241 "is_configured": true, 00:12:05.241 "data_offset": 0, 00:12:05.241 "data_size": 65536 00:12:05.241 }, 00:12:05.241 { 00:12:05.241 "name": "BaseBdev4", 00:12:05.241 "uuid": "35c90a5f-ac4a-4c1a-991a-dee9c7c93ac0", 00:12:05.241 "is_configured": true, 00:12:05.241 "data_offset": 0, 00:12:05.241 "data_size": 65536 00:12:05.241 } 00:12:05.241 ] 00:12:05.241 } 00:12:05.241 } 00:12:05.241 }' 00:12:05.241 07:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.241 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.241 BaseBdev2 00:12:05.241 BaseBdev3 
00:12:05.241 BaseBdev4' 00:12:05.241 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.241 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.241 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.242 07:43:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.242 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.504 07:43:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.504 [2024-11-29 07:43:55.275614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.504 
07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.504 "name": "Existed_Raid", 00:12:05.504 "uuid": "253b56ae-6d7c-42a4-a94c-025ed322e640", 00:12:05.504 "strip_size_kb": 0, 00:12:05.504 "state": "online", 00:12:05.504 "raid_level": "raid1", 00:12:05.504 "superblock": false, 00:12:05.504 "num_base_bdevs": 4, 00:12:05.504 "num_base_bdevs_discovered": 3, 00:12:05.504 "num_base_bdevs_operational": 3, 00:12:05.504 "base_bdevs_list": [ 00:12:05.504 { 00:12:05.504 "name": null, 00:12:05.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.504 "is_configured": false, 00:12:05.504 "data_offset": 0, 00:12:05.504 "data_size": 65536 00:12:05.504 }, 00:12:05.504 { 00:12:05.504 "name": "BaseBdev2", 00:12:05.504 "uuid": "2f830f5f-8f3d-4d52-b4e1-3a3154124be8", 00:12:05.504 "is_configured": true, 00:12:05.504 "data_offset": 0, 00:12:05.504 "data_size": 65536 00:12:05.504 }, 00:12:05.504 { 00:12:05.504 "name": "BaseBdev3", 00:12:05.504 "uuid": "4f959e67-d718-4f20-89e4-2053f968777c", 00:12:05.504 "is_configured": true, 00:12:05.504 "data_offset": 0, 
00:12:05.504 "data_size": 65536 00:12:05.504 }, 00:12:05.504 { 00:12:05.504 "name": "BaseBdev4", 00:12:05.504 "uuid": "35c90a5f-ac4a-4c1a-991a-dee9c7c93ac0", 00:12:05.504 "is_configured": true, 00:12:05.504 "data_offset": 0, 00:12:05.504 "data_size": 65536 00:12:05.504 } 00:12:05.504 ] 00:12:05.504 }' 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.504 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 [2024-11-29 07:43:55.843983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.074 07:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.074 [2024-11-29 07:43:55.999350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.334 [2024-11-29 07:43:56.152707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:06.334 [2024-11-29 07:43:56.152809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.334 [2024-11-29 07:43:56.245707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.334 [2024-11-29 07:43:56.245760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.334 [2024-11-29 07:43:56.245771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.334 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.594 BaseBdev2 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.594 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.594 [ 00:12:06.594 { 00:12:06.594 "name": "BaseBdev2", 00:12:06.594 "aliases": [ 00:12:06.594 "d1b4c368-b347-42d1-aca5-d2f0261c6b3f" 00:12:06.594 ], 00:12:06.594 "product_name": "Malloc disk", 00:12:06.594 "block_size": 512, 00:12:06.594 "num_blocks": 65536, 00:12:06.594 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:06.594 "assigned_rate_limits": { 00:12:06.594 "rw_ios_per_sec": 0, 00:12:06.594 "rw_mbytes_per_sec": 0, 00:12:06.594 "r_mbytes_per_sec": 0, 00:12:06.594 "w_mbytes_per_sec": 0 00:12:06.594 }, 00:12:06.594 "claimed": false, 00:12:06.594 "zoned": false, 00:12:06.594 "supported_io_types": { 00:12:06.594 "read": true, 00:12:06.594 "write": true, 00:12:06.594 "unmap": true, 00:12:06.594 "flush": true, 00:12:06.594 "reset": true, 00:12:06.595 "nvme_admin": false, 00:12:06.595 "nvme_io": false, 00:12:06.595 "nvme_io_md": false, 00:12:06.595 "write_zeroes": true, 00:12:06.595 "zcopy": true, 00:12:06.595 "get_zone_info": false, 00:12:06.595 "zone_management": false, 00:12:06.595 "zone_append": false, 
00:12:06.595 "compare": false, 00:12:06.595 "compare_and_write": false, 00:12:06.595 "abort": true, 00:12:06.595 "seek_hole": false, 00:12:06.595 "seek_data": false, 00:12:06.595 "copy": true, 00:12:06.595 "nvme_iov_md": false 00:12:06.595 }, 00:12:06.595 "memory_domains": [ 00:12:06.595 { 00:12:06.595 "dma_device_id": "system", 00:12:06.595 "dma_device_type": 1 00:12:06.595 }, 00:12:06.595 { 00:12:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.595 "dma_device_type": 2 00:12:06.595 } 00:12:06.595 ], 00:12:06.595 "driver_specific": {} 00:12:06.595 } 00:12:06.595 ] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 BaseBdev3 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 [ 00:12:06.595 { 00:12:06.595 "name": "BaseBdev3", 00:12:06.595 "aliases": [ 00:12:06.595 "52a10ba5-a14c-4790-b6f7-b0b70ce43110" 00:12:06.595 ], 00:12:06.595 "product_name": "Malloc disk", 00:12:06.595 "block_size": 512, 00:12:06.595 "num_blocks": 65536, 00:12:06.595 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:06.595 "assigned_rate_limits": { 00:12:06.595 "rw_ios_per_sec": 0, 00:12:06.595 "rw_mbytes_per_sec": 0, 00:12:06.595 "r_mbytes_per_sec": 0, 00:12:06.595 "w_mbytes_per_sec": 0 00:12:06.595 }, 00:12:06.595 "claimed": false, 00:12:06.595 "zoned": false, 00:12:06.595 "supported_io_types": { 00:12:06.595 "read": true, 00:12:06.595 "write": true, 00:12:06.595 "unmap": true, 00:12:06.595 "flush": true, 00:12:06.595 "reset": true, 00:12:06.595 "nvme_admin": false, 00:12:06.595 "nvme_io": false, 00:12:06.595 "nvme_io_md": false, 00:12:06.595 "write_zeroes": true, 00:12:06.595 "zcopy": true, 00:12:06.595 "get_zone_info": false, 00:12:06.595 "zone_management": false, 00:12:06.595 "zone_append": false, 
00:12:06.595 "compare": false, 00:12:06.595 "compare_and_write": false, 00:12:06.595 "abort": true, 00:12:06.595 "seek_hole": false, 00:12:06.595 "seek_data": false, 00:12:06.595 "copy": true, 00:12:06.595 "nvme_iov_md": false 00:12:06.595 }, 00:12:06.595 "memory_domains": [ 00:12:06.595 { 00:12:06.595 "dma_device_id": "system", 00:12:06.595 "dma_device_type": 1 00:12:06.595 }, 00:12:06.595 { 00:12:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.595 "dma_device_type": 2 00:12:06.595 } 00:12:06.595 ], 00:12:06.595 "driver_specific": {} 00:12:06.595 } 00:12:06.595 ] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 BaseBdev4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 [ 00:12:06.595 { 00:12:06.595 "name": "BaseBdev4", 00:12:06.595 "aliases": [ 00:12:06.595 "a2dd6985-ba83-4e69-8352-f47aa05b3e68" 00:12:06.595 ], 00:12:06.595 "product_name": "Malloc disk", 00:12:06.595 "block_size": 512, 00:12:06.595 "num_blocks": 65536, 00:12:06.595 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:06.595 "assigned_rate_limits": { 00:12:06.595 "rw_ios_per_sec": 0, 00:12:06.595 "rw_mbytes_per_sec": 0, 00:12:06.595 "r_mbytes_per_sec": 0, 00:12:06.595 "w_mbytes_per_sec": 0 00:12:06.595 }, 00:12:06.595 "claimed": false, 00:12:06.595 "zoned": false, 00:12:06.595 "supported_io_types": { 00:12:06.595 "read": true, 00:12:06.595 "write": true, 00:12:06.595 "unmap": true, 00:12:06.595 "flush": true, 00:12:06.595 "reset": true, 00:12:06.595 "nvme_admin": false, 00:12:06.595 "nvme_io": false, 00:12:06.595 "nvme_io_md": false, 00:12:06.595 "write_zeroes": true, 00:12:06.595 "zcopy": true, 00:12:06.595 "get_zone_info": false, 00:12:06.595 "zone_management": false, 00:12:06.595 "zone_append": false, 
00:12:06.595 "compare": false, 00:12:06.595 "compare_and_write": false, 00:12:06.595 "abort": true, 00:12:06.595 "seek_hole": false, 00:12:06.595 "seek_data": false, 00:12:06.595 "copy": true, 00:12:06.595 "nvme_iov_md": false 00:12:06.595 }, 00:12:06.595 "memory_domains": [ 00:12:06.595 { 00:12:06.595 "dma_device_id": "system", 00:12:06.595 "dma_device_type": 1 00:12:06.595 }, 00:12:06.595 { 00:12:06.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.595 "dma_device_type": 2 00:12:06.595 } 00:12:06.595 ], 00:12:06.595 "driver_specific": {} 00:12:06.595 } 00:12:06.595 ] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.595 [2024-11-29 07:43:56.517218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.595 [2024-11-29 07:43:56.517280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.595 [2024-11-29 07:43:56.517299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.595 [2024-11-29 07:43:56.518980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.595 [2024-11-29 07:43:56.519030] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.595 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.856 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.856 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:06.856 "name": "Existed_Raid", 00:12:06.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.856 "strip_size_kb": 0, 00:12:06.856 "state": "configuring", 00:12:06.856 "raid_level": "raid1", 00:12:06.856 "superblock": false, 00:12:06.856 "num_base_bdevs": 4, 00:12:06.856 "num_base_bdevs_discovered": 3, 00:12:06.856 "num_base_bdevs_operational": 4, 00:12:06.856 "base_bdevs_list": [ 00:12:06.856 { 00:12:06.856 "name": "BaseBdev1", 00:12:06.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.856 "is_configured": false, 00:12:06.856 "data_offset": 0, 00:12:06.856 "data_size": 0 00:12:06.856 }, 00:12:06.856 { 00:12:06.856 "name": "BaseBdev2", 00:12:06.856 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:06.856 "is_configured": true, 00:12:06.856 "data_offset": 0, 00:12:06.856 "data_size": 65536 00:12:06.856 }, 00:12:06.856 { 00:12:06.856 "name": "BaseBdev3", 00:12:06.856 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:06.856 "is_configured": true, 00:12:06.856 "data_offset": 0, 00:12:06.856 "data_size": 65536 00:12:06.856 }, 00:12:06.856 { 00:12:06.856 "name": "BaseBdev4", 00:12:06.856 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:06.856 "is_configured": true, 00:12:06.856 "data_offset": 0, 00:12:06.856 "data_size": 65536 00:12:06.856 } 00:12:06.856 ] 00:12:06.856 }' 00:12:06.856 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.856 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 [2024-11-29 07:43:56.892586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.115 "name": "Existed_Raid", 00:12:07.115 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:07.115 "strip_size_kb": 0, 00:12:07.115 "state": "configuring", 00:12:07.115 "raid_level": "raid1", 00:12:07.115 "superblock": false, 00:12:07.115 "num_base_bdevs": 4, 00:12:07.115 "num_base_bdevs_discovered": 2, 00:12:07.115 "num_base_bdevs_operational": 4, 00:12:07.115 "base_bdevs_list": [ 00:12:07.115 { 00:12:07.115 "name": "BaseBdev1", 00:12:07.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.115 "is_configured": false, 00:12:07.115 "data_offset": 0, 00:12:07.115 "data_size": 0 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": null, 00:12:07.115 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:07.115 "is_configured": false, 00:12:07.115 "data_offset": 0, 00:12:07.115 "data_size": 65536 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": "BaseBdev3", 00:12:07.115 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:07.115 "is_configured": true, 00:12:07.115 "data_offset": 0, 00:12:07.115 "data_size": 65536 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": "BaseBdev4", 00:12:07.115 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:07.115 "is_configured": true, 00:12:07.115 "data_offset": 0, 00:12:07.115 "data_size": 65536 00:12:07.115 } 00:12:07.115 ] 00:12:07.115 }' 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.115 07:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 [2024-11-29 07:43:57.408068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.686 BaseBdev1 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 [ 00:12:07.686 { 00:12:07.686 "name": "BaseBdev1", 00:12:07.686 "aliases": [ 00:12:07.686 "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b" 00:12:07.686 ], 00:12:07.686 "product_name": "Malloc disk", 00:12:07.686 "block_size": 512, 00:12:07.686 "num_blocks": 65536, 00:12:07.686 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:07.686 "assigned_rate_limits": { 00:12:07.686 "rw_ios_per_sec": 0, 00:12:07.686 "rw_mbytes_per_sec": 0, 00:12:07.686 "r_mbytes_per_sec": 0, 00:12:07.686 "w_mbytes_per_sec": 0 00:12:07.686 }, 00:12:07.686 "claimed": true, 00:12:07.686 "claim_type": "exclusive_write", 00:12:07.686 "zoned": false, 00:12:07.686 "supported_io_types": { 00:12:07.686 "read": true, 00:12:07.686 "write": true, 00:12:07.686 "unmap": true, 00:12:07.686 "flush": true, 00:12:07.686 "reset": true, 00:12:07.686 "nvme_admin": false, 00:12:07.686 "nvme_io": false, 00:12:07.686 "nvme_io_md": false, 00:12:07.686 "write_zeroes": true, 00:12:07.686 "zcopy": true, 00:12:07.686 "get_zone_info": false, 00:12:07.686 "zone_management": false, 00:12:07.686 "zone_append": false, 00:12:07.686 "compare": false, 00:12:07.686 "compare_and_write": false, 00:12:07.686 "abort": true, 00:12:07.686 "seek_hole": false, 00:12:07.686 "seek_data": false, 00:12:07.686 "copy": true, 00:12:07.686 "nvme_iov_md": false 00:12:07.686 }, 00:12:07.686 "memory_domains": [ 00:12:07.686 { 00:12:07.686 "dma_device_id": "system", 00:12:07.686 "dma_device_type": 1 00:12:07.686 }, 00:12:07.686 { 00:12:07.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.686 "dma_device_type": 2 00:12:07.686 } 00:12:07.686 ], 00:12:07.686 "driver_specific": {} 00:12:07.686 } 00:12:07.686 ] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.686 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.686 "name": "Existed_Raid", 00:12:07.686 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:07.686 "strip_size_kb": 0, 00:12:07.686 "state": "configuring", 00:12:07.686 "raid_level": "raid1", 00:12:07.686 "superblock": false, 00:12:07.686 "num_base_bdevs": 4, 00:12:07.686 "num_base_bdevs_discovered": 3, 00:12:07.686 "num_base_bdevs_operational": 4, 00:12:07.686 "base_bdevs_list": [ 00:12:07.686 { 00:12:07.686 "name": "BaseBdev1", 00:12:07.686 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:07.686 "is_configured": true, 00:12:07.686 "data_offset": 0, 00:12:07.686 "data_size": 65536 00:12:07.686 }, 00:12:07.686 { 00:12:07.686 "name": null, 00:12:07.686 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:07.686 "is_configured": false, 00:12:07.686 "data_offset": 0, 00:12:07.686 "data_size": 65536 00:12:07.686 }, 00:12:07.686 { 00:12:07.686 "name": "BaseBdev3", 00:12:07.686 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:07.686 "is_configured": true, 00:12:07.686 "data_offset": 0, 00:12:07.686 "data_size": 65536 00:12:07.686 }, 00:12:07.686 { 00:12:07.686 "name": "BaseBdev4", 00:12:07.686 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:07.686 "is_configured": true, 00:12:07.686 "data_offset": 0, 00:12:07.686 "data_size": 65536 00:12:07.686 } 00:12:07.686 ] 00:12:07.687 }' 00:12:07.687 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.687 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.256 [2024-11-29 07:43:57.927252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.256 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.257 07:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.257 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.257 "name": "Existed_Raid", 00:12:08.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.257 "strip_size_kb": 0, 00:12:08.257 "state": "configuring", 00:12:08.257 "raid_level": "raid1", 00:12:08.257 "superblock": false, 00:12:08.257 "num_base_bdevs": 4, 00:12:08.257 "num_base_bdevs_discovered": 2, 00:12:08.257 "num_base_bdevs_operational": 4, 00:12:08.257 "base_bdevs_list": [ 00:12:08.257 { 00:12:08.257 "name": "BaseBdev1", 00:12:08.257 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:08.257 "is_configured": true, 00:12:08.257 "data_offset": 0, 00:12:08.257 "data_size": 65536 00:12:08.257 }, 00:12:08.257 { 00:12:08.257 "name": null, 00:12:08.257 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:08.257 "is_configured": false, 00:12:08.257 "data_offset": 0, 00:12:08.257 "data_size": 65536 00:12:08.257 }, 00:12:08.257 { 00:12:08.257 "name": null, 00:12:08.257 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:08.257 "is_configured": false, 00:12:08.257 "data_offset": 0, 00:12:08.257 "data_size": 65536 00:12:08.257 }, 00:12:08.257 { 00:12:08.257 "name": "BaseBdev4", 00:12:08.257 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:08.257 "is_configured": true, 00:12:08.257 "data_offset": 0, 00:12:08.257 "data_size": 65536 00:12:08.257 } 00:12:08.257 ] 00:12:08.257 }' 00:12:08.257 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.257 07:43:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.517 [2024-11-29 07:43:58.390500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.517 07:43:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.517 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.518 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.518 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.518 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.518 "name": "Existed_Raid", 00:12:08.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.518 "strip_size_kb": 0, 00:12:08.518 "state": "configuring", 00:12:08.518 "raid_level": "raid1", 00:12:08.518 "superblock": false, 00:12:08.518 "num_base_bdevs": 4, 00:12:08.518 "num_base_bdevs_discovered": 3, 00:12:08.518 "num_base_bdevs_operational": 4, 00:12:08.518 "base_bdevs_list": [ 00:12:08.518 { 00:12:08.518 "name": "BaseBdev1", 00:12:08.518 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:08.518 "is_configured": true, 00:12:08.518 "data_offset": 0, 00:12:08.518 "data_size": 65536 00:12:08.518 }, 00:12:08.518 { 00:12:08.518 "name": null, 00:12:08.518 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:08.518 "is_configured": false, 00:12:08.518 "data_offset": 
0, 00:12:08.518 "data_size": 65536 00:12:08.518 }, 00:12:08.518 { 00:12:08.518 "name": "BaseBdev3", 00:12:08.518 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:08.518 "is_configured": true, 00:12:08.518 "data_offset": 0, 00:12:08.518 "data_size": 65536 00:12:08.518 }, 00:12:08.518 { 00:12:08.518 "name": "BaseBdev4", 00:12:08.518 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:08.518 "is_configured": true, 00:12:08.518 "data_offset": 0, 00:12:08.518 "data_size": 65536 00:12:08.518 } 00:12:08.518 ] 00:12:08.518 }' 00:12:08.518 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.518 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.088 07:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.088 [2024-11-29 07:43:58.917655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.088 07:43:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.088 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.347 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.347 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.347 "name": "Existed_Raid", 00:12:09.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.347 "strip_size_kb": 0, 00:12:09.347 "state": "configuring", 00:12:09.347 
"raid_level": "raid1", 00:12:09.347 "superblock": false, 00:12:09.347 "num_base_bdevs": 4, 00:12:09.347 "num_base_bdevs_discovered": 2, 00:12:09.347 "num_base_bdevs_operational": 4, 00:12:09.347 "base_bdevs_list": [ 00:12:09.347 { 00:12:09.347 "name": null, 00:12:09.347 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:09.347 "is_configured": false, 00:12:09.347 "data_offset": 0, 00:12:09.347 "data_size": 65536 00:12:09.347 }, 00:12:09.347 { 00:12:09.347 "name": null, 00:12:09.347 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:09.347 "is_configured": false, 00:12:09.347 "data_offset": 0, 00:12:09.347 "data_size": 65536 00:12:09.347 }, 00:12:09.347 { 00:12:09.347 "name": "BaseBdev3", 00:12:09.347 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:09.347 "is_configured": true, 00:12:09.347 "data_offset": 0, 00:12:09.347 "data_size": 65536 00:12:09.347 }, 00:12:09.347 { 00:12:09.347 "name": "BaseBdev4", 00:12:09.347 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:09.347 "is_configured": true, 00:12:09.347 "data_offset": 0, 00:12:09.347 "data_size": 65536 00:12:09.347 } 00:12:09.347 ] 00:12:09.347 }' 00:12:09.347 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.347 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.606 [2024-11-29 07:43:59.514455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.606 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.865 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.865 "name": "Existed_Raid", 00:12:09.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.865 "strip_size_kb": 0, 00:12:09.865 "state": "configuring", 00:12:09.865 "raid_level": "raid1", 00:12:09.865 "superblock": false, 00:12:09.865 "num_base_bdevs": 4, 00:12:09.865 "num_base_bdevs_discovered": 3, 00:12:09.865 "num_base_bdevs_operational": 4, 00:12:09.865 "base_bdevs_list": [ 00:12:09.865 { 00:12:09.865 "name": null, 00:12:09.865 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:09.865 "is_configured": false, 00:12:09.865 "data_offset": 0, 00:12:09.865 "data_size": 65536 00:12:09.865 }, 00:12:09.865 { 00:12:09.865 "name": "BaseBdev2", 00:12:09.865 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:09.865 "is_configured": true, 00:12:09.865 "data_offset": 0, 00:12:09.865 "data_size": 65536 00:12:09.865 }, 00:12:09.865 { 00:12:09.865 "name": "BaseBdev3", 00:12:09.865 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:09.865 "is_configured": true, 00:12:09.865 "data_offset": 0, 00:12:09.865 "data_size": 65536 00:12:09.865 }, 00:12:09.865 { 00:12:09.865 "name": "BaseBdev4", 00:12:09.865 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:09.865 "is_configured": true, 00:12:09.865 "data_offset": 0, 00:12:09.865 "data_size": 65536 00:12:09.865 } 00:12:09.865 ] 00:12:09.865 }' 00:12:09.865 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.865 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.124 07:43:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.124 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.124 07:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.124 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.124 07:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a8d2df76-2e8d-449e-b7b1-4f58498d4b8b 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.124 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.383 [2024-11-29 07:44:00.089779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.383 [2024-11-29 07:44:00.089833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.383 [2024-11-29 07:44:00.089842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.383 
[2024-11-29 07:44:00.090141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:10.383 [2024-11-29 07:44:00.090309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.383 [2024-11-29 07:44:00.090325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:10.383 [2024-11-29 07:44:00.090582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.383 NewBaseBdev 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.383 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.384 [ 00:12:10.384 { 00:12:10.384 "name": "NewBaseBdev", 00:12:10.384 "aliases": [ 00:12:10.384 "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b" 00:12:10.384 ], 00:12:10.384 "product_name": "Malloc disk", 00:12:10.384 "block_size": 512, 00:12:10.384 "num_blocks": 65536, 00:12:10.384 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:10.384 "assigned_rate_limits": { 00:12:10.384 "rw_ios_per_sec": 0, 00:12:10.384 "rw_mbytes_per_sec": 0, 00:12:10.384 "r_mbytes_per_sec": 0, 00:12:10.384 "w_mbytes_per_sec": 0 00:12:10.384 }, 00:12:10.384 "claimed": true, 00:12:10.384 "claim_type": "exclusive_write", 00:12:10.384 "zoned": false, 00:12:10.384 "supported_io_types": { 00:12:10.384 "read": true, 00:12:10.384 "write": true, 00:12:10.384 "unmap": true, 00:12:10.384 "flush": true, 00:12:10.384 "reset": true, 00:12:10.384 "nvme_admin": false, 00:12:10.384 "nvme_io": false, 00:12:10.384 "nvme_io_md": false, 00:12:10.384 "write_zeroes": true, 00:12:10.384 "zcopy": true, 00:12:10.384 "get_zone_info": false, 00:12:10.384 "zone_management": false, 00:12:10.384 "zone_append": false, 00:12:10.384 "compare": false, 00:12:10.384 "compare_and_write": false, 00:12:10.384 "abort": true, 00:12:10.384 "seek_hole": false, 00:12:10.384 "seek_data": false, 00:12:10.384 "copy": true, 00:12:10.384 "nvme_iov_md": false 00:12:10.384 }, 00:12:10.384 "memory_domains": [ 00:12:10.384 { 00:12:10.384 "dma_device_id": "system", 00:12:10.384 "dma_device_type": 1 00:12:10.384 }, 00:12:10.384 { 00:12:10.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.384 "dma_device_type": 2 00:12:10.384 } 00:12:10.384 ], 00:12:10.384 "driver_specific": {} 00:12:10.384 } 00:12:10.384 ] 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.384 "name": "Existed_Raid", 00:12:10.384 "uuid": "702021ce-207d-44b3-8b54-f6e7c780792b", 00:12:10.384 "strip_size_kb": 0, 00:12:10.384 "state": "online", 00:12:10.384 
"raid_level": "raid1", 00:12:10.384 "superblock": false, 00:12:10.384 "num_base_bdevs": 4, 00:12:10.384 "num_base_bdevs_discovered": 4, 00:12:10.384 "num_base_bdevs_operational": 4, 00:12:10.384 "base_bdevs_list": [ 00:12:10.384 { 00:12:10.384 "name": "NewBaseBdev", 00:12:10.384 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:10.384 "is_configured": true, 00:12:10.384 "data_offset": 0, 00:12:10.384 "data_size": 65536 00:12:10.384 }, 00:12:10.384 { 00:12:10.384 "name": "BaseBdev2", 00:12:10.384 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:10.384 "is_configured": true, 00:12:10.384 "data_offset": 0, 00:12:10.384 "data_size": 65536 00:12:10.384 }, 00:12:10.384 { 00:12:10.384 "name": "BaseBdev3", 00:12:10.384 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:10.384 "is_configured": true, 00:12:10.384 "data_offset": 0, 00:12:10.384 "data_size": 65536 00:12:10.384 }, 00:12:10.384 { 00:12:10.384 "name": "BaseBdev4", 00:12:10.384 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:10.384 "is_configured": true, 00:12:10.384 "data_offset": 0, 00:12:10.384 "data_size": 65536 00:12:10.384 } 00:12:10.384 ] 00:12:10.384 }' 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.384 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.954 [2024-11-29 07:44:00.613319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.954 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.954 "name": "Existed_Raid", 00:12:10.954 "aliases": [ 00:12:10.954 "702021ce-207d-44b3-8b54-f6e7c780792b" 00:12:10.954 ], 00:12:10.954 "product_name": "Raid Volume", 00:12:10.954 "block_size": 512, 00:12:10.954 "num_blocks": 65536, 00:12:10.954 "uuid": "702021ce-207d-44b3-8b54-f6e7c780792b", 00:12:10.954 "assigned_rate_limits": { 00:12:10.954 "rw_ios_per_sec": 0, 00:12:10.954 "rw_mbytes_per_sec": 0, 00:12:10.954 "r_mbytes_per_sec": 0, 00:12:10.954 "w_mbytes_per_sec": 0 00:12:10.954 }, 00:12:10.954 "claimed": false, 00:12:10.954 "zoned": false, 00:12:10.954 "supported_io_types": { 00:12:10.954 "read": true, 00:12:10.954 "write": true, 00:12:10.954 "unmap": false, 00:12:10.954 "flush": false, 00:12:10.954 "reset": true, 00:12:10.954 "nvme_admin": false, 00:12:10.954 "nvme_io": false, 00:12:10.954 "nvme_io_md": false, 00:12:10.954 "write_zeroes": true, 00:12:10.954 "zcopy": false, 00:12:10.954 "get_zone_info": false, 00:12:10.954 "zone_management": false, 00:12:10.954 "zone_append": false, 00:12:10.954 "compare": false, 00:12:10.954 "compare_and_write": false, 00:12:10.954 "abort": false, 00:12:10.954 "seek_hole": false, 00:12:10.954 "seek_data": false, 00:12:10.954 
"copy": false, 00:12:10.954 "nvme_iov_md": false 00:12:10.954 }, 00:12:10.954 "memory_domains": [ 00:12:10.954 { 00:12:10.954 "dma_device_id": "system", 00:12:10.954 "dma_device_type": 1 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.954 "dma_device_type": 2 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "system", 00:12:10.954 "dma_device_type": 1 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.954 "dma_device_type": 2 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "system", 00:12:10.954 "dma_device_type": 1 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.954 "dma_device_type": 2 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "system", 00:12:10.954 "dma_device_type": 1 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.954 "dma_device_type": 2 00:12:10.954 } 00:12:10.954 ], 00:12:10.954 "driver_specific": { 00:12:10.954 "raid": { 00:12:10.954 "uuid": "702021ce-207d-44b3-8b54-f6e7c780792b", 00:12:10.954 "strip_size_kb": 0, 00:12:10.954 "state": "online", 00:12:10.954 "raid_level": "raid1", 00:12:10.954 "superblock": false, 00:12:10.954 "num_base_bdevs": 4, 00:12:10.954 "num_base_bdevs_discovered": 4, 00:12:10.954 "num_base_bdevs_operational": 4, 00:12:10.954 "base_bdevs_list": [ 00:12:10.954 { 00:12:10.954 "name": "NewBaseBdev", 00:12:10.954 "uuid": "a8d2df76-2e8d-449e-b7b1-4f58498d4b8b", 00:12:10.954 "is_configured": true, 00:12:10.954 "data_offset": 0, 00:12:10.954 "data_size": 65536 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "name": "BaseBdev2", 00:12:10.954 "uuid": "d1b4c368-b347-42d1-aca5-d2f0261c6b3f", 00:12:10.954 "is_configured": true, 00:12:10.954 "data_offset": 0, 00:12:10.954 "data_size": 65536 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "name": "BaseBdev3", 00:12:10.954 "uuid": "52a10ba5-a14c-4790-b6f7-b0b70ce43110", 00:12:10.954 
"is_configured": true, 00:12:10.954 "data_offset": 0, 00:12:10.954 "data_size": 65536 00:12:10.954 }, 00:12:10.954 { 00:12:10.954 "name": "BaseBdev4", 00:12:10.955 "uuid": "a2dd6985-ba83-4e69-8352-f47aa05b3e68", 00:12:10.955 "is_configured": true, 00:12:10.955 "data_offset": 0, 00:12:10.955 "data_size": 65536 00:12:10.955 } 00:12:10.955 ] 00:12:10.955 } 00:12:10.955 } 00:12:10.955 }' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.955 BaseBdev2 00:12:10.955 BaseBdev3 00:12:10.955 BaseBdev4' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.955 07:44:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.955 07:44:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.955 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.221 [2024-11-29 07:44:00.920388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.221 [2024-11-29 07:44:00.920419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.221 [2024-11-29 07:44:00.920515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.221 [2024-11-29 07:44:00.920813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.221 [2024-11-29 07:44:00.920834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72950 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72950 ']' 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72950 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72950 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.221 killing process with pid 72950 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72950' 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72950 00:12:11.221 [2024-11-29 07:44:00.951864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.221 07:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72950 00:12:11.481 [2024-11-29 07:44:01.337443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.864 00:12:12.864 real 0m11.323s 00:12:12.864 user 0m18.055s 00:12:12.864 sys 0m1.976s 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.864 ************************************ 00:12:12.864 END TEST raid_state_function_test 00:12:12.864 ************************************ 
00:12:12.864 07:44:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:12.864 07:44:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.864 07:44:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.864 07:44:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.864 ************************************ 00:12:12.864 START TEST raid_state_function_test_sb 00:12:12.864 ************************************ 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.864 
07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73620 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73620' 00:12:12.864 Process raid pid: 73620 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73620 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73620 ']' 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.864 07:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.864 [2024-11-29 07:44:02.612349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:12.864 [2024-11-29 07:44:02.612466] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.864 [2024-11-29 07:44:02.777677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.124 [2024-11-29 07:44:02.889574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.383 [2024-11-29 07:44:03.087049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.383 [2024-11-29 07:44:03.087091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.642 [2024-11-29 07:44:03.446089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.642 [2024-11-29 07:44:03.446159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.642 [2024-11-29 07:44:03.446169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.642 [2024-11-29 07:44:03.446179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.642 [2024-11-29 07:44:03.446185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:13.642 [2024-11-29 07:44:03.446194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.642 [2024-11-29 07:44:03.446200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.642 [2024-11-29 07:44:03.446208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.642 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.643 07:44:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.643 "name": "Existed_Raid", 00:12:13.643 "uuid": "b3ef2c9a-2e6b-413f-859d-64bd5e383d8b", 00:12:13.643 "strip_size_kb": 0, 00:12:13.643 "state": "configuring", 00:12:13.643 "raid_level": "raid1", 00:12:13.643 "superblock": true, 00:12:13.643 "num_base_bdevs": 4, 00:12:13.643 "num_base_bdevs_discovered": 0, 00:12:13.643 "num_base_bdevs_operational": 4, 00:12:13.643 "base_bdevs_list": [ 00:12:13.643 { 00:12:13.643 "name": "BaseBdev1", 00:12:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.643 "is_configured": false, 00:12:13.643 "data_offset": 0, 00:12:13.643 "data_size": 0 00:12:13.643 }, 00:12:13.643 { 00:12:13.643 "name": "BaseBdev2", 00:12:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.643 "is_configured": false, 00:12:13.643 "data_offset": 0, 00:12:13.643 "data_size": 0 00:12:13.643 }, 00:12:13.643 { 00:12:13.643 "name": "BaseBdev3", 00:12:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.643 "is_configured": false, 00:12:13.643 "data_offset": 0, 00:12:13.643 "data_size": 0 00:12:13.643 }, 00:12:13.643 { 00:12:13.643 "name": "BaseBdev4", 00:12:13.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.643 "is_configured": false, 00:12:13.643 "data_offset": 0, 00:12:13.643 "data_size": 0 00:12:13.643 } 00:12:13.643 ] 00:12:13.643 }' 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.643 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 [2024-11-29 07:44:03.853330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.213 [2024-11-29 07:44:03.853373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 [2024-11-29 07:44:03.865294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.213 [2024-11-29 07:44:03.865353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.213 [2024-11-29 07:44:03.865362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.213 [2024-11-29 07:44:03.865372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.213 [2024-11-29 07:44:03.865378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.213 [2024-11-29 07:44:03.865387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.213 [2024-11-29 07:44:03.865393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:14.213 [2024-11-29 07:44:03.865402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 [2024-11-29 07:44:03.912004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.213 BaseBdev1 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 [ 00:12:14.213 { 00:12:14.213 "name": "BaseBdev1", 00:12:14.213 "aliases": [ 00:12:14.213 "337420a4-f8a8-43b5-aca0-210161a69248" 00:12:14.213 ], 00:12:14.213 "product_name": "Malloc disk", 00:12:14.213 "block_size": 512, 00:12:14.213 "num_blocks": 65536, 00:12:14.213 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:14.213 "assigned_rate_limits": { 00:12:14.213 "rw_ios_per_sec": 0, 00:12:14.213 "rw_mbytes_per_sec": 0, 00:12:14.213 "r_mbytes_per_sec": 0, 00:12:14.213 "w_mbytes_per_sec": 0 00:12:14.213 }, 00:12:14.213 "claimed": true, 00:12:14.213 "claim_type": "exclusive_write", 00:12:14.213 "zoned": false, 00:12:14.213 "supported_io_types": { 00:12:14.213 "read": true, 00:12:14.213 "write": true, 00:12:14.213 "unmap": true, 00:12:14.213 "flush": true, 00:12:14.213 "reset": true, 00:12:14.213 "nvme_admin": false, 00:12:14.213 "nvme_io": false, 00:12:14.213 "nvme_io_md": false, 00:12:14.213 "write_zeroes": true, 00:12:14.213 "zcopy": true, 00:12:14.213 "get_zone_info": false, 00:12:14.213 "zone_management": false, 00:12:14.213 "zone_append": false, 00:12:14.213 "compare": false, 00:12:14.213 "compare_and_write": false, 00:12:14.213 "abort": true, 00:12:14.213 "seek_hole": false, 00:12:14.213 "seek_data": false, 00:12:14.213 "copy": true, 00:12:14.213 "nvme_iov_md": false 00:12:14.213 }, 00:12:14.213 "memory_domains": [ 00:12:14.213 { 00:12:14.213 "dma_device_id": "system", 00:12:14.213 "dma_device_type": 1 00:12:14.213 }, 00:12:14.213 { 00:12:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.213 "dma_device_type": 2 00:12:14.213 } 00:12:14.213 ], 00:12:14.213 "driver_specific": {} 
00:12:14.213 } 00:12:14.213 ] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.213 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.213 "name": "Existed_Raid", 00:12:14.213 "uuid": "2e27248d-78d7-417a-bc98-bbabee90eac6", 00:12:14.213 "strip_size_kb": 0, 00:12:14.213 "state": "configuring", 00:12:14.213 "raid_level": "raid1", 00:12:14.213 "superblock": true, 00:12:14.213 "num_base_bdevs": 4, 00:12:14.213 "num_base_bdevs_discovered": 1, 00:12:14.213 "num_base_bdevs_operational": 4, 00:12:14.213 "base_bdevs_list": [ 00:12:14.213 { 00:12:14.213 "name": "BaseBdev1", 00:12:14.213 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:14.213 "is_configured": true, 00:12:14.213 "data_offset": 2048, 00:12:14.213 "data_size": 63488 00:12:14.213 }, 00:12:14.213 { 00:12:14.214 "name": "BaseBdev2", 00:12:14.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.214 "is_configured": false, 00:12:14.214 "data_offset": 0, 00:12:14.214 "data_size": 0 00:12:14.214 }, 00:12:14.214 { 00:12:14.214 "name": "BaseBdev3", 00:12:14.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.214 "is_configured": false, 00:12:14.214 "data_offset": 0, 00:12:14.214 "data_size": 0 00:12:14.214 }, 00:12:14.214 { 00:12:14.214 "name": "BaseBdev4", 00:12:14.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.214 "is_configured": false, 00:12:14.214 "data_offset": 0, 00:12:14.214 "data_size": 0 00:12:14.214 } 00:12:14.214 ] 00:12:14.214 }' 00:12:14.214 07:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.214 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.782 [2024-11-29 07:44:04.443150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.782 [2024-11-29 07:44:04.443206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.782 [2024-11-29 07:44:04.455168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.782 [2024-11-29 07:44:04.456990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.782 [2024-11-29 07:44:04.457035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.782 [2024-11-29 07:44:04.457045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.782 [2024-11-29 07:44:04.457055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.782 [2024-11-29 07:44:04.457061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.782 [2024-11-29 07:44:04.457070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:14.782 07:44:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.782 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.782 "name": 
"Existed_Raid", 00:12:14.782 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:14.782 "strip_size_kb": 0, 00:12:14.782 "state": "configuring", 00:12:14.782 "raid_level": "raid1", 00:12:14.782 "superblock": true, 00:12:14.782 "num_base_bdevs": 4, 00:12:14.782 "num_base_bdevs_discovered": 1, 00:12:14.782 "num_base_bdevs_operational": 4, 00:12:14.782 "base_bdevs_list": [ 00:12:14.782 { 00:12:14.782 "name": "BaseBdev1", 00:12:14.782 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:14.782 "is_configured": true, 00:12:14.782 "data_offset": 2048, 00:12:14.782 "data_size": 63488 00:12:14.782 }, 00:12:14.783 { 00:12:14.783 "name": "BaseBdev2", 00:12:14.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.783 "is_configured": false, 00:12:14.783 "data_offset": 0, 00:12:14.783 "data_size": 0 00:12:14.783 }, 00:12:14.783 { 00:12:14.783 "name": "BaseBdev3", 00:12:14.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.783 "is_configured": false, 00:12:14.783 "data_offset": 0, 00:12:14.783 "data_size": 0 00:12:14.783 }, 00:12:14.783 { 00:12:14.783 "name": "BaseBdev4", 00:12:14.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.783 "is_configured": false, 00:12:14.783 "data_offset": 0, 00:12:14.783 "data_size": 0 00:12:14.783 } 00:12:14.783 ] 00:12:14.783 }' 00:12:14.783 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.783 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.042 [2024-11-29 07:44:04.899376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.042 
BaseBdev2 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.042 [ 00:12:15.042 { 00:12:15.042 "name": "BaseBdev2", 00:12:15.042 "aliases": [ 00:12:15.042 "f3420eeb-d777-43a3-9f4f-ea954992f251" 00:12:15.042 ], 00:12:15.042 "product_name": "Malloc disk", 00:12:15.042 "block_size": 512, 00:12:15.042 "num_blocks": 65536, 00:12:15.042 "uuid": "f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:15.042 "assigned_rate_limits": { 
00:12:15.042 "rw_ios_per_sec": 0, 00:12:15.042 "rw_mbytes_per_sec": 0, 00:12:15.042 "r_mbytes_per_sec": 0, 00:12:15.042 "w_mbytes_per_sec": 0 00:12:15.042 }, 00:12:15.042 "claimed": true, 00:12:15.042 "claim_type": "exclusive_write", 00:12:15.042 "zoned": false, 00:12:15.042 "supported_io_types": { 00:12:15.042 "read": true, 00:12:15.042 "write": true, 00:12:15.042 "unmap": true, 00:12:15.042 "flush": true, 00:12:15.042 "reset": true, 00:12:15.042 "nvme_admin": false, 00:12:15.042 "nvme_io": false, 00:12:15.042 "nvme_io_md": false, 00:12:15.042 "write_zeroes": true, 00:12:15.042 "zcopy": true, 00:12:15.042 "get_zone_info": false, 00:12:15.042 "zone_management": false, 00:12:15.042 "zone_append": false, 00:12:15.042 "compare": false, 00:12:15.042 "compare_and_write": false, 00:12:15.042 "abort": true, 00:12:15.042 "seek_hole": false, 00:12:15.042 "seek_data": false, 00:12:15.042 "copy": true, 00:12:15.042 "nvme_iov_md": false 00:12:15.042 }, 00:12:15.042 "memory_domains": [ 00:12:15.042 { 00:12:15.042 "dma_device_id": "system", 00:12:15.042 "dma_device_type": 1 00:12:15.042 }, 00:12:15.042 { 00:12:15.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.042 "dma_device_type": 2 00:12:15.042 } 00:12:15.042 ], 00:12:15.042 "driver_specific": {} 00:12:15.042 } 00:12:15.042 ] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.042 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.042 "name": "Existed_Raid", 00:12:15.043 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:15.043 "strip_size_kb": 0, 00:12:15.043 "state": "configuring", 00:12:15.043 "raid_level": "raid1", 00:12:15.043 "superblock": true, 00:12:15.043 "num_base_bdevs": 4, 00:12:15.043 "num_base_bdevs_discovered": 2, 00:12:15.043 "num_base_bdevs_operational": 4, 00:12:15.043 
"base_bdevs_list": [ 00:12:15.043 { 00:12:15.043 "name": "BaseBdev1", 00:12:15.043 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:15.043 "is_configured": true, 00:12:15.043 "data_offset": 2048, 00:12:15.043 "data_size": 63488 00:12:15.043 }, 00:12:15.043 { 00:12:15.043 "name": "BaseBdev2", 00:12:15.043 "uuid": "f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:15.043 "is_configured": true, 00:12:15.043 "data_offset": 2048, 00:12:15.043 "data_size": 63488 00:12:15.043 }, 00:12:15.043 { 00:12:15.043 "name": "BaseBdev3", 00:12:15.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.043 "is_configured": false, 00:12:15.043 "data_offset": 0, 00:12:15.043 "data_size": 0 00:12:15.043 }, 00:12:15.043 { 00:12:15.043 "name": "BaseBdev4", 00:12:15.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.043 "is_configured": false, 00:12:15.043 "data_offset": 0, 00:12:15.043 "data_size": 0 00:12:15.043 } 00:12:15.043 ] 00:12:15.043 }' 00:12:15.043 07:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.043 07:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.612 [2024-11-29 07:44:05.412028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.612 BaseBdev3 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.612 [ 00:12:15.612 { 00:12:15.612 "name": "BaseBdev3", 00:12:15.612 "aliases": [ 00:12:15.612 "88b6c5a9-77a6-4b86-95e0-c52083c49735" 00:12:15.612 ], 00:12:15.612 "product_name": "Malloc disk", 00:12:15.612 "block_size": 512, 00:12:15.612 "num_blocks": 65536, 00:12:15.612 "uuid": "88b6c5a9-77a6-4b86-95e0-c52083c49735", 00:12:15.612 "assigned_rate_limits": { 00:12:15.612 "rw_ios_per_sec": 0, 00:12:15.612 "rw_mbytes_per_sec": 0, 00:12:15.612 "r_mbytes_per_sec": 0, 00:12:15.612 "w_mbytes_per_sec": 0 00:12:15.612 }, 00:12:15.612 "claimed": true, 00:12:15.612 "claim_type": "exclusive_write", 00:12:15.612 "zoned": false, 00:12:15.612 "supported_io_types": { 00:12:15.612 "read": true, 00:12:15.612 
"write": true, 00:12:15.612 "unmap": true, 00:12:15.612 "flush": true, 00:12:15.612 "reset": true, 00:12:15.612 "nvme_admin": false, 00:12:15.612 "nvme_io": false, 00:12:15.612 "nvme_io_md": false, 00:12:15.612 "write_zeroes": true, 00:12:15.612 "zcopy": true, 00:12:15.612 "get_zone_info": false, 00:12:15.612 "zone_management": false, 00:12:15.612 "zone_append": false, 00:12:15.612 "compare": false, 00:12:15.612 "compare_and_write": false, 00:12:15.612 "abort": true, 00:12:15.612 "seek_hole": false, 00:12:15.612 "seek_data": false, 00:12:15.612 "copy": true, 00:12:15.612 "nvme_iov_md": false 00:12:15.612 }, 00:12:15.612 "memory_domains": [ 00:12:15.612 { 00:12:15.612 "dma_device_id": "system", 00:12:15.612 "dma_device_type": 1 00:12:15.612 }, 00:12:15.612 { 00:12:15.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.612 "dma_device_type": 2 00:12:15.612 } 00:12:15.612 ], 00:12:15.612 "driver_specific": {} 00:12:15.612 } 00:12:15.612 ] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.612 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.612 "name": "Existed_Raid", 00:12:15.612 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:15.612 "strip_size_kb": 0, 00:12:15.612 "state": "configuring", 00:12:15.612 "raid_level": "raid1", 00:12:15.612 "superblock": true, 00:12:15.612 "num_base_bdevs": 4, 00:12:15.612 "num_base_bdevs_discovered": 3, 00:12:15.612 "num_base_bdevs_operational": 4, 00:12:15.612 "base_bdevs_list": [ 00:12:15.612 { 00:12:15.612 "name": "BaseBdev1", 00:12:15.612 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:15.612 "is_configured": true, 00:12:15.612 "data_offset": 2048, 00:12:15.612 "data_size": 63488 00:12:15.612 }, 00:12:15.612 { 00:12:15.612 "name": "BaseBdev2", 00:12:15.612 "uuid": 
"f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:15.612 "is_configured": true, 00:12:15.612 "data_offset": 2048, 00:12:15.612 "data_size": 63488 00:12:15.613 }, 00:12:15.613 { 00:12:15.613 "name": "BaseBdev3", 00:12:15.613 "uuid": "88b6c5a9-77a6-4b86-95e0-c52083c49735", 00:12:15.613 "is_configured": true, 00:12:15.613 "data_offset": 2048, 00:12:15.613 "data_size": 63488 00:12:15.613 }, 00:12:15.613 { 00:12:15.613 "name": "BaseBdev4", 00:12:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.613 "is_configured": false, 00:12:15.613 "data_offset": 0, 00:12:15.613 "data_size": 0 00:12:15.613 } 00:12:15.613 ] 00:12:15.613 }' 00:12:15.613 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.613 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.182 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:16.182 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.182 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.182 [2024-11-29 07:44:05.931353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.183 [2024-11-29 07:44:05.931610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.183 [2024-11-29 07:44:05.931625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.183 [2024-11-29 07:44:05.931921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:16.183 BaseBdev4 00:12:16.183 [2024-11-29 07:44:05.932086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.183 [2024-11-29 07:44:05.932113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:16.183 [2024-11-29 07:44:05.932256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 [ 00:12:16.183 { 00:12:16.183 "name": "BaseBdev4", 00:12:16.183 "aliases": [ 00:12:16.183 "26e0bdec-bb08-40bf-9371-4c852a26f232" 00:12:16.183 ], 00:12:16.183 "product_name": "Malloc disk", 00:12:16.183 "block_size": 512, 00:12:16.183 
"num_blocks": 65536, 00:12:16.183 "uuid": "26e0bdec-bb08-40bf-9371-4c852a26f232", 00:12:16.183 "assigned_rate_limits": { 00:12:16.183 "rw_ios_per_sec": 0, 00:12:16.183 "rw_mbytes_per_sec": 0, 00:12:16.183 "r_mbytes_per_sec": 0, 00:12:16.183 "w_mbytes_per_sec": 0 00:12:16.183 }, 00:12:16.183 "claimed": true, 00:12:16.183 "claim_type": "exclusive_write", 00:12:16.183 "zoned": false, 00:12:16.183 "supported_io_types": { 00:12:16.183 "read": true, 00:12:16.183 "write": true, 00:12:16.183 "unmap": true, 00:12:16.183 "flush": true, 00:12:16.183 "reset": true, 00:12:16.183 "nvme_admin": false, 00:12:16.183 "nvme_io": false, 00:12:16.183 "nvme_io_md": false, 00:12:16.183 "write_zeroes": true, 00:12:16.183 "zcopy": true, 00:12:16.183 "get_zone_info": false, 00:12:16.183 "zone_management": false, 00:12:16.183 "zone_append": false, 00:12:16.183 "compare": false, 00:12:16.183 "compare_and_write": false, 00:12:16.183 "abort": true, 00:12:16.183 "seek_hole": false, 00:12:16.183 "seek_data": false, 00:12:16.183 "copy": true, 00:12:16.183 "nvme_iov_md": false 00:12:16.183 }, 00:12:16.183 "memory_domains": [ 00:12:16.183 { 00:12:16.183 "dma_device_id": "system", 00:12:16.183 "dma_device_type": 1 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.183 "dma_device_type": 2 00:12:16.183 } 00:12:16.183 ], 00:12:16.183 "driver_specific": {} 00:12:16.183 } 00:12:16.183 ] 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 07:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.183 "name": "Existed_Raid", 00:12:16.183 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:16.183 "strip_size_kb": 0, 00:12:16.183 "state": "online", 00:12:16.183 "raid_level": "raid1", 00:12:16.183 "superblock": true, 00:12:16.183 "num_base_bdevs": 4, 
00:12:16.183 "num_base_bdevs_discovered": 4, 00:12:16.183 "num_base_bdevs_operational": 4, 00:12:16.183 "base_bdevs_list": [ 00:12:16.183 { 00:12:16.183 "name": "BaseBdev1", 00:12:16.183 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "name": "BaseBdev2", 00:12:16.183 "uuid": "f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "name": "BaseBdev3", 00:12:16.183 "uuid": "88b6c5a9-77a6-4b86-95e0-c52083c49735", 00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "name": "BaseBdev4", 00:12:16.183 "uuid": "26e0bdec-bb08-40bf-9371-4c852a26f232", 00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 } 00:12:16.183 ] 00:12:16.183 }' 00:12:16.183 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.183 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.753 
07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.753 [2024-11-29 07:44:06.410947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.753 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.753 "name": "Existed_Raid", 00:12:16.753 "aliases": [ 00:12:16.753 "f3abc34d-da6c-4f26-8190-b013594586c0" 00:12:16.753 ], 00:12:16.753 "product_name": "Raid Volume", 00:12:16.753 "block_size": 512, 00:12:16.753 "num_blocks": 63488, 00:12:16.753 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:16.753 "assigned_rate_limits": { 00:12:16.753 "rw_ios_per_sec": 0, 00:12:16.753 "rw_mbytes_per_sec": 0, 00:12:16.753 "r_mbytes_per_sec": 0, 00:12:16.753 "w_mbytes_per_sec": 0 00:12:16.753 }, 00:12:16.753 "claimed": false, 00:12:16.753 "zoned": false, 00:12:16.753 "supported_io_types": { 00:12:16.753 "read": true, 00:12:16.753 "write": true, 00:12:16.753 "unmap": false, 00:12:16.753 "flush": false, 00:12:16.753 "reset": true, 00:12:16.753 "nvme_admin": false, 00:12:16.753 "nvme_io": false, 00:12:16.753 "nvme_io_md": false, 00:12:16.753 "write_zeroes": true, 00:12:16.753 "zcopy": false, 00:12:16.753 "get_zone_info": false, 00:12:16.753 "zone_management": false, 00:12:16.753 "zone_append": false, 00:12:16.753 "compare": false, 00:12:16.753 "compare_and_write": false, 00:12:16.753 "abort": false, 00:12:16.753 "seek_hole": false, 00:12:16.753 "seek_data": false, 00:12:16.753 "copy": false, 00:12:16.753 
"nvme_iov_md": false 00:12:16.753 }, 00:12:16.753 "memory_domains": [ 00:12:16.753 { 00:12:16.753 "dma_device_id": "system", 00:12:16.753 "dma_device_type": 1 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.753 "dma_device_type": 2 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "system", 00:12:16.753 "dma_device_type": 1 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.753 "dma_device_type": 2 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "system", 00:12:16.753 "dma_device_type": 1 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.753 "dma_device_type": 2 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "system", 00:12:16.753 "dma_device_type": 1 00:12:16.753 }, 00:12:16.753 { 00:12:16.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.753 "dma_device_type": 2 00:12:16.753 } 00:12:16.753 ], 00:12:16.753 "driver_specific": { 00:12:16.753 "raid": { 00:12:16.753 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:16.753 "strip_size_kb": 0, 00:12:16.753 "state": "online", 00:12:16.753 "raid_level": "raid1", 00:12:16.753 "superblock": true, 00:12:16.753 "num_base_bdevs": 4, 00:12:16.753 "num_base_bdevs_discovered": 4, 00:12:16.753 "num_base_bdevs_operational": 4, 00:12:16.753 "base_bdevs_list": [ 00:12:16.753 { 00:12:16.753 "name": "BaseBdev1", 00:12:16.753 "uuid": "337420a4-f8a8-43b5-aca0-210161a69248", 00:12:16.754 "is_configured": true, 00:12:16.754 "data_offset": 2048, 00:12:16.754 "data_size": 63488 00:12:16.754 }, 00:12:16.754 { 00:12:16.754 "name": "BaseBdev2", 00:12:16.754 "uuid": "f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:16.754 "is_configured": true, 00:12:16.754 "data_offset": 2048, 00:12:16.754 "data_size": 63488 00:12:16.754 }, 00:12:16.754 { 00:12:16.754 "name": "BaseBdev3", 00:12:16.754 "uuid": "88b6c5a9-77a6-4b86-95e0-c52083c49735", 00:12:16.754 "is_configured": true, 
00:12:16.754 "data_offset": 2048, 00:12:16.754 "data_size": 63488 00:12:16.754 }, 00:12:16.754 { 00:12:16.754 "name": "BaseBdev4", 00:12:16.754 "uuid": "26e0bdec-bb08-40bf-9371-4c852a26f232", 00:12:16.754 "is_configured": true, 00:12:16.754 "data_offset": 2048, 00:12:16.754 "data_size": 63488 00:12:16.754 } 00:12:16.754 ] 00:12:16.754 } 00:12:16.754 } 00:12:16.754 }' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:16.754 BaseBdev2 00:12:16.754 BaseBdev3 00:12:16.754 BaseBdev4' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.754 07:44:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.754 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.014 [2024-11-29 07:44:06.754083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.014 07:44:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.014 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.014 "name": "Existed_Raid", 00:12:17.014 "uuid": "f3abc34d-da6c-4f26-8190-b013594586c0", 00:12:17.014 "strip_size_kb": 0, 00:12:17.014 
"state": "online", 00:12:17.014 "raid_level": "raid1", 00:12:17.014 "superblock": true, 00:12:17.014 "num_base_bdevs": 4, 00:12:17.014 "num_base_bdevs_discovered": 3, 00:12:17.014 "num_base_bdevs_operational": 3, 00:12:17.014 "base_bdevs_list": [ 00:12:17.014 { 00:12:17.014 "name": null, 00:12:17.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.014 "is_configured": false, 00:12:17.014 "data_offset": 0, 00:12:17.014 "data_size": 63488 00:12:17.014 }, 00:12:17.014 { 00:12:17.014 "name": "BaseBdev2", 00:12:17.014 "uuid": "f3420eeb-d777-43a3-9f4f-ea954992f251", 00:12:17.014 "is_configured": true, 00:12:17.014 "data_offset": 2048, 00:12:17.014 "data_size": 63488 00:12:17.014 }, 00:12:17.014 { 00:12:17.014 "name": "BaseBdev3", 00:12:17.014 "uuid": "88b6c5a9-77a6-4b86-95e0-c52083c49735", 00:12:17.014 "is_configured": true, 00:12:17.014 "data_offset": 2048, 00:12:17.014 "data_size": 63488 00:12:17.015 }, 00:12:17.015 { 00:12:17.015 "name": "BaseBdev4", 00:12:17.015 "uuid": "26e0bdec-bb08-40bf-9371-4c852a26f232", 00:12:17.015 "is_configured": true, 00:12:17.015 "data_offset": 2048, 00:12:17.015 "data_size": 63488 00:12:17.015 } 00:12:17.015 ] 00:12:17.015 }' 00:12:17.015 07:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.015 07:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.584 07:44:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.584 [2024-11-29 07:44:07.313195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.584 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.584 [2024-11-29 07:44:07.449761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 [2024-11-29 07:44:07.592079] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:17.845 [2024-11-29 07:44:07.592203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.845 [2024-11-29 07:44:07.688479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.845 [2024-11-29 07:44:07.688540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.845 [2024-11-29 07:44:07.688552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 BaseBdev2 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.845 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:18.106 [ 00:12:18.106 { 00:12:18.106 "name": "BaseBdev2", 00:12:18.106 "aliases": [ 00:12:18.106 "b2167229-b3bf-4a92-af62-58a2e588201a" 00:12:18.106 ], 00:12:18.106 "product_name": "Malloc disk", 00:12:18.106 "block_size": 512, 00:12:18.106 "num_blocks": 65536, 00:12:18.106 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:18.106 "assigned_rate_limits": { 00:12:18.106 "rw_ios_per_sec": 0, 00:12:18.106 "rw_mbytes_per_sec": 0, 00:12:18.106 "r_mbytes_per_sec": 0, 00:12:18.106 "w_mbytes_per_sec": 0 00:12:18.106 }, 00:12:18.106 "claimed": false, 00:12:18.106 "zoned": false, 00:12:18.106 "supported_io_types": { 00:12:18.106 "read": true, 00:12:18.106 "write": true, 00:12:18.106 "unmap": true, 00:12:18.106 "flush": true, 00:12:18.106 "reset": true, 00:12:18.106 "nvme_admin": false, 00:12:18.106 "nvme_io": false, 00:12:18.106 "nvme_io_md": false, 00:12:18.106 "write_zeroes": true, 00:12:18.106 "zcopy": true, 00:12:18.106 "get_zone_info": false, 00:12:18.106 "zone_management": false, 00:12:18.106 "zone_append": false, 00:12:18.106 "compare": false, 00:12:18.106 "compare_and_write": false, 00:12:18.106 "abort": true, 00:12:18.106 "seek_hole": false, 00:12:18.106 "seek_data": false, 00:12:18.106 "copy": true, 00:12:18.106 "nvme_iov_md": false 00:12:18.106 }, 00:12:18.106 "memory_domains": [ 00:12:18.106 { 00:12:18.106 "dma_device_id": "system", 00:12:18.106 "dma_device_type": 1 00:12:18.106 }, 00:12:18.106 { 00:12:18.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.106 "dma_device_type": 2 00:12:18.106 } 00:12:18.106 ], 00:12:18.106 "driver_specific": {} 00:12:18.106 } 00:12:18.106 ] 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.106 07:44:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 BaseBdev3 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.106 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 [ 00:12:18.107 { 00:12:18.107 "name": "BaseBdev3", 00:12:18.107 "aliases": [ 00:12:18.107 "20e2608a-2d32-4418-88ec-8254eb3f29ac" 00:12:18.107 ], 00:12:18.107 "product_name": "Malloc disk", 00:12:18.107 "block_size": 512, 00:12:18.107 "num_blocks": 65536, 00:12:18.107 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:18.107 "assigned_rate_limits": { 00:12:18.107 "rw_ios_per_sec": 0, 00:12:18.107 "rw_mbytes_per_sec": 0, 00:12:18.107 "r_mbytes_per_sec": 0, 00:12:18.107 "w_mbytes_per_sec": 0 00:12:18.107 }, 00:12:18.107 "claimed": false, 00:12:18.107 "zoned": false, 00:12:18.107 "supported_io_types": { 00:12:18.107 "read": true, 00:12:18.107 "write": true, 00:12:18.107 "unmap": true, 00:12:18.107 "flush": true, 00:12:18.107 "reset": true, 00:12:18.107 "nvme_admin": false, 00:12:18.107 "nvme_io": false, 00:12:18.107 "nvme_io_md": false, 00:12:18.107 "write_zeroes": true, 00:12:18.107 "zcopy": true, 00:12:18.107 "get_zone_info": false, 00:12:18.107 "zone_management": false, 00:12:18.107 "zone_append": false, 00:12:18.107 "compare": false, 00:12:18.107 "compare_and_write": false, 00:12:18.107 "abort": true, 00:12:18.107 "seek_hole": false, 00:12:18.107 "seek_data": false, 00:12:18.107 "copy": true, 00:12:18.107 "nvme_iov_md": false 00:12:18.107 }, 00:12:18.107 "memory_domains": [ 00:12:18.107 { 00:12:18.107 "dma_device_id": "system", 00:12:18.107 "dma_device_type": 1 00:12:18.107 }, 00:12:18.107 { 00:12:18.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.107 "dma_device_type": 2 00:12:18.107 } 00:12:18.107 ], 00:12:18.107 "driver_specific": {} 00:12:18.107 } 00:12:18.107 ] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 BaseBdev4 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 [ 00:12:18.107 { 00:12:18.107 "name": "BaseBdev4", 00:12:18.107 "aliases": [ 00:12:18.107 "9b2cc844-5992-4455-b49d-98abb6057c8d" 00:12:18.107 ], 00:12:18.107 "product_name": "Malloc disk", 00:12:18.107 "block_size": 512, 00:12:18.107 "num_blocks": 65536, 00:12:18.107 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:18.107 "assigned_rate_limits": { 00:12:18.107 "rw_ios_per_sec": 0, 00:12:18.107 "rw_mbytes_per_sec": 0, 00:12:18.107 "r_mbytes_per_sec": 0, 00:12:18.107 "w_mbytes_per_sec": 0 00:12:18.107 }, 00:12:18.107 "claimed": false, 00:12:18.107 "zoned": false, 00:12:18.107 "supported_io_types": { 00:12:18.107 "read": true, 00:12:18.107 "write": true, 00:12:18.107 "unmap": true, 00:12:18.107 "flush": true, 00:12:18.107 "reset": true, 00:12:18.107 "nvme_admin": false, 00:12:18.107 "nvme_io": false, 00:12:18.107 "nvme_io_md": false, 00:12:18.107 "write_zeroes": true, 00:12:18.107 "zcopy": true, 00:12:18.107 "get_zone_info": false, 00:12:18.107 "zone_management": false, 00:12:18.107 "zone_append": false, 00:12:18.107 "compare": false, 00:12:18.107 "compare_and_write": false, 00:12:18.107 "abort": true, 00:12:18.107 "seek_hole": false, 00:12:18.107 "seek_data": false, 00:12:18.107 "copy": true, 00:12:18.107 "nvme_iov_md": false 00:12:18.107 }, 00:12:18.107 "memory_domains": [ 00:12:18.107 { 00:12:18.107 "dma_device_id": "system", 00:12:18.107 "dma_device_type": 1 00:12:18.107 }, 00:12:18.107 { 00:12:18.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.107 "dma_device_type": 2 00:12:18.107 } 00:12:18.107 ], 00:12:18.107 "driver_specific": {} 00:12:18.107 } 00:12:18.107 ] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.107 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.107 [2024-11-29 07:44:07.971485] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.107 [2024-11-29 07:44:07.971535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.107 [2024-11-29 07:44:07.971556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.107 [2024-11-29 07:44:07.973424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.107 [2024-11-29 07:44:07.973493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.108 07:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.108 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.108 "name": "Existed_Raid", 00:12:18.108 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:18.108 "strip_size_kb": 0, 00:12:18.108 "state": "configuring", 00:12:18.108 "raid_level": "raid1", 00:12:18.108 "superblock": true, 00:12:18.108 "num_base_bdevs": 4, 00:12:18.108 "num_base_bdevs_discovered": 3, 00:12:18.108 "num_base_bdevs_operational": 4, 00:12:18.108 "base_bdevs_list": [ 00:12:18.108 { 00:12:18.108 "name": "BaseBdev1", 00:12:18.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.108 "is_configured": false, 00:12:18.108 "data_offset": 0, 00:12:18.108 "data_size": 0 00:12:18.108 }, 00:12:18.108 { 00:12:18.108 "name": "BaseBdev2", 00:12:18.108 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 
00:12:18.108 "is_configured": true, 00:12:18.108 "data_offset": 2048, 00:12:18.108 "data_size": 63488 00:12:18.108 }, 00:12:18.108 { 00:12:18.108 "name": "BaseBdev3", 00:12:18.108 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:18.108 "is_configured": true, 00:12:18.108 "data_offset": 2048, 00:12:18.108 "data_size": 63488 00:12:18.108 }, 00:12:18.108 { 00:12:18.108 "name": "BaseBdev4", 00:12:18.108 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:18.108 "is_configured": true, 00:12:18.108 "data_offset": 2048, 00:12:18.108 "data_size": 63488 00:12:18.108 } 00:12:18.108 ] 00:12:18.108 }' 00:12:18.108 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.108 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.679 [2024-11-29 07:44:08.422723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.679 "name": "Existed_Raid", 00:12:18.679 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:18.679 "strip_size_kb": 0, 00:12:18.679 "state": "configuring", 00:12:18.679 "raid_level": "raid1", 00:12:18.679 "superblock": true, 00:12:18.679 "num_base_bdevs": 4, 00:12:18.679 "num_base_bdevs_discovered": 2, 00:12:18.679 "num_base_bdevs_operational": 4, 00:12:18.679 "base_bdevs_list": [ 00:12:18.679 { 00:12:18.679 "name": "BaseBdev1", 00:12:18.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.679 "is_configured": false, 00:12:18.679 "data_offset": 0, 00:12:18.679 "data_size": 0 00:12:18.679 }, 00:12:18.679 { 00:12:18.679 "name": null, 00:12:18.679 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:18.679 
"is_configured": false, 00:12:18.679 "data_offset": 0, 00:12:18.679 "data_size": 63488 00:12:18.679 }, 00:12:18.679 { 00:12:18.679 "name": "BaseBdev3", 00:12:18.679 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:18.679 "is_configured": true, 00:12:18.679 "data_offset": 2048, 00:12:18.679 "data_size": 63488 00:12:18.679 }, 00:12:18.679 { 00:12:18.679 "name": "BaseBdev4", 00:12:18.679 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:18.679 "is_configured": true, 00:12:18.679 "data_offset": 2048, 00:12:18.679 "data_size": 63488 00:12:18.679 } 00:12:18.679 ] 00:12:18.679 }' 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.679 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 [2024-11-29 07:44:08.965830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.250 BaseBdev1 
00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.250 07:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 [ 00:12:19.250 { 00:12:19.250 "name": "BaseBdev1", 00:12:19.250 "aliases": [ 00:12:19.250 "d0b66763-caf1-493c-b796-3d5050c4adb2" 00:12:19.250 ], 00:12:19.250 "product_name": "Malloc disk", 00:12:19.250 "block_size": 512, 00:12:19.250 "num_blocks": 65536, 00:12:19.250 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:19.250 "assigned_rate_limits": { 00:12:19.250 
"rw_ios_per_sec": 0, 00:12:19.250 "rw_mbytes_per_sec": 0, 00:12:19.250 "r_mbytes_per_sec": 0, 00:12:19.250 "w_mbytes_per_sec": 0 00:12:19.250 }, 00:12:19.250 "claimed": true, 00:12:19.250 "claim_type": "exclusive_write", 00:12:19.250 "zoned": false, 00:12:19.250 "supported_io_types": { 00:12:19.250 "read": true, 00:12:19.250 "write": true, 00:12:19.250 "unmap": true, 00:12:19.250 "flush": true, 00:12:19.250 "reset": true, 00:12:19.250 "nvme_admin": false, 00:12:19.250 "nvme_io": false, 00:12:19.250 "nvme_io_md": false, 00:12:19.250 "write_zeroes": true, 00:12:19.250 "zcopy": true, 00:12:19.250 "get_zone_info": false, 00:12:19.250 "zone_management": false, 00:12:19.250 "zone_append": false, 00:12:19.250 "compare": false, 00:12:19.250 "compare_and_write": false, 00:12:19.250 "abort": true, 00:12:19.250 "seek_hole": false, 00:12:19.250 "seek_data": false, 00:12:19.250 "copy": true, 00:12:19.250 "nvme_iov_md": false 00:12:19.250 }, 00:12:19.250 "memory_domains": [ 00:12:19.250 { 00:12:19.250 "dma_device_id": "system", 00:12:19.250 "dma_device_type": 1 00:12:19.250 }, 00:12:19.250 { 00:12:19.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.250 "dma_device_type": 2 00:12:19.250 } 00:12:19.250 ], 00:12:19.250 "driver_specific": {} 00:12:19.250 } 00:12:19.250 ] 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.250 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.251 "name": "Existed_Raid", 00:12:19.251 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:19.251 "strip_size_kb": 0, 00:12:19.251 "state": "configuring", 00:12:19.251 "raid_level": "raid1", 00:12:19.251 "superblock": true, 00:12:19.251 "num_base_bdevs": 4, 00:12:19.251 "num_base_bdevs_discovered": 3, 00:12:19.251 "num_base_bdevs_operational": 4, 00:12:19.251 "base_bdevs_list": [ 00:12:19.251 { 00:12:19.251 "name": "BaseBdev1", 00:12:19.251 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:19.251 "is_configured": true, 00:12:19.251 "data_offset": 2048, 00:12:19.251 "data_size": 63488 
00:12:19.251 }, 00:12:19.251 { 00:12:19.251 "name": null, 00:12:19.251 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:19.251 "is_configured": false, 00:12:19.251 "data_offset": 0, 00:12:19.251 "data_size": 63488 00:12:19.251 }, 00:12:19.251 { 00:12:19.251 "name": "BaseBdev3", 00:12:19.251 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:19.251 "is_configured": true, 00:12:19.251 "data_offset": 2048, 00:12:19.251 "data_size": 63488 00:12:19.251 }, 00:12:19.251 { 00:12:19.251 "name": "BaseBdev4", 00:12:19.251 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:19.251 "is_configured": true, 00:12:19.251 "data_offset": 2048, 00:12:19.251 "data_size": 63488 00:12:19.251 } 00:12:19.251 ] 00:12:19.251 }' 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.251 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.511 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.511 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.511 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.511 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.511 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.771 
[2024-11-29 07:44:09.465090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.771 07:44:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.771 "name": "Existed_Raid", 00:12:19.771 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:19.771 "strip_size_kb": 0, 00:12:19.771 "state": "configuring", 00:12:19.771 "raid_level": "raid1", 00:12:19.771 "superblock": true, 00:12:19.771 "num_base_bdevs": 4, 00:12:19.771 "num_base_bdevs_discovered": 2, 00:12:19.771 "num_base_bdevs_operational": 4, 00:12:19.771 "base_bdevs_list": [ 00:12:19.771 { 00:12:19.771 "name": "BaseBdev1", 00:12:19.771 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": null, 00:12:19.771 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:19.771 "is_configured": false, 00:12:19.771 "data_offset": 0, 00:12:19.771 "data_size": 63488 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": null, 00:12:19.771 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:19.771 "is_configured": false, 00:12:19.771 "data_offset": 0, 00:12:19.771 "data_size": 63488 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": "BaseBdev4", 00:12:19.771 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 00:12:19.771 } 00:12:19.771 ] 00:12:19.771 }' 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.771 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.031 
07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 [2024-11-29 07:44:09.892347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.031 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.031 "name": "Existed_Raid", 00:12:20.031 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:20.031 "strip_size_kb": 0, 00:12:20.031 "state": "configuring", 00:12:20.031 "raid_level": "raid1", 00:12:20.031 "superblock": true, 00:12:20.031 "num_base_bdevs": 4, 00:12:20.031 "num_base_bdevs_discovered": 3, 00:12:20.031 "num_base_bdevs_operational": 4, 00:12:20.031 "base_bdevs_list": [ 00:12:20.031 { 00:12:20.031 "name": "BaseBdev1", 00:12:20.031 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:20.031 "is_configured": true, 00:12:20.031 "data_offset": 2048, 00:12:20.031 "data_size": 63488 00:12:20.032 }, 00:12:20.032 { 00:12:20.032 "name": null, 00:12:20.032 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:20.032 "is_configured": false, 00:12:20.032 "data_offset": 0, 00:12:20.032 "data_size": 63488 00:12:20.032 }, 00:12:20.032 { 00:12:20.032 "name": "BaseBdev3", 00:12:20.032 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:20.032 "is_configured": true, 00:12:20.032 "data_offset": 2048, 00:12:20.032 "data_size": 63488 00:12:20.032 }, 00:12:20.032 { 00:12:20.032 "name": "BaseBdev4", 00:12:20.032 "uuid": 
"9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:20.032 "is_configured": true, 00:12:20.032 "data_offset": 2048, 00:12:20.032 "data_size": 63488 00:12:20.032 } 00:12:20.032 ] 00:12:20.032 }' 00:12:20.032 07:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.032 07:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 [2024-11-29 07:44:10.407500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.600 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.863 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.863 "name": "Existed_Raid", 00:12:20.863 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:20.863 "strip_size_kb": 0, 00:12:20.863 "state": "configuring", 00:12:20.863 "raid_level": "raid1", 00:12:20.863 "superblock": true, 00:12:20.863 "num_base_bdevs": 4, 00:12:20.863 "num_base_bdevs_discovered": 2, 00:12:20.863 "num_base_bdevs_operational": 4, 00:12:20.863 "base_bdevs_list": [ 00:12:20.863 { 00:12:20.863 "name": null, 00:12:20.863 
"uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:20.863 "is_configured": false, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 63488 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": null, 00:12:20.863 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:20.863 "is_configured": false, 00:12:20.863 "data_offset": 0, 00:12:20.863 "data_size": 63488 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev3", 00:12:20.863 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 2048, 00:12:20.863 "data_size": 63488 00:12:20.863 }, 00:12:20.863 { 00:12:20.863 "name": "BaseBdev4", 00:12:20.863 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:20.863 "is_configured": true, 00:12:20.863 "data_offset": 2048, 00:12:20.863 "data_size": 63488 00:12:20.863 } 00:12:20.863 ] 00:12:20.863 }' 00:12:20.863 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.863 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.123 [2024-11-29 07:44:10.973029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.123 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.124 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.124 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.124 07:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.124 07:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.124 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.124 "name": "Existed_Raid", 00:12:21.124 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:21.124 "strip_size_kb": 0, 00:12:21.124 "state": "configuring", 00:12:21.124 "raid_level": "raid1", 00:12:21.124 "superblock": true, 00:12:21.124 "num_base_bdevs": 4, 00:12:21.124 "num_base_bdevs_discovered": 3, 00:12:21.124 "num_base_bdevs_operational": 4, 00:12:21.124 "base_bdevs_list": [ 00:12:21.124 { 00:12:21.124 "name": null, 00:12:21.124 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:21.124 "is_configured": false, 00:12:21.124 "data_offset": 0, 00:12:21.124 "data_size": 63488 00:12:21.124 }, 00:12:21.124 { 00:12:21.124 "name": "BaseBdev2", 00:12:21.124 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:21.124 "is_configured": true, 00:12:21.124 "data_offset": 2048, 00:12:21.124 "data_size": 63488 00:12:21.124 }, 00:12:21.124 { 00:12:21.124 "name": "BaseBdev3", 00:12:21.124 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:21.124 "is_configured": true, 00:12:21.124 "data_offset": 2048, 00:12:21.124 "data_size": 63488 00:12:21.124 }, 00:12:21.124 { 00:12:21.124 "name": "BaseBdev4", 00:12:21.124 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:21.124 "is_configured": true, 00:12:21.124 "data_offset": 2048, 00:12:21.124 "data_size": 63488 00:12:21.124 } 00:12:21.124 ] 00:12:21.124 }' 00:12:21.124 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.124 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.694 07:44:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0b66763-caf1-493c-b796-3d5050c4adb2 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 [2024-11-29 07:44:11.579804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.694 [2024-11-29 07:44:11.580078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.694 [2024-11-29 07:44:11.580095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.694 [2024-11-29 07:44:11.580407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:21.694 [2024-11-29 07:44:11.580589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.694 [2024-11-29 07:44:11.580607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.694 NewBaseBdev 00:12:21.694 [2024-11-29 07:44:11.580740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.694 [ 00:12:21.694 { 00:12:21.694 "name": "NewBaseBdev", 00:12:21.694 "aliases": [ 00:12:21.694 "d0b66763-caf1-493c-b796-3d5050c4adb2" 00:12:21.694 ], 00:12:21.694 "product_name": "Malloc disk", 00:12:21.694 "block_size": 512, 00:12:21.694 "num_blocks": 65536, 00:12:21.694 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:21.694 "assigned_rate_limits": { 00:12:21.694 "rw_ios_per_sec": 0, 00:12:21.694 "rw_mbytes_per_sec": 0, 00:12:21.694 "r_mbytes_per_sec": 0, 00:12:21.694 "w_mbytes_per_sec": 0 00:12:21.694 }, 00:12:21.694 "claimed": true, 00:12:21.694 "claim_type": "exclusive_write", 00:12:21.694 "zoned": false, 00:12:21.694 "supported_io_types": { 00:12:21.694 "read": true, 00:12:21.694 "write": true, 00:12:21.694 "unmap": true, 00:12:21.694 "flush": true, 00:12:21.694 "reset": true, 00:12:21.694 "nvme_admin": false, 00:12:21.694 "nvme_io": false, 00:12:21.694 "nvme_io_md": false, 00:12:21.694 "write_zeroes": true, 00:12:21.694 "zcopy": true, 00:12:21.694 "get_zone_info": false, 00:12:21.694 "zone_management": false, 00:12:21.694 "zone_append": false, 00:12:21.694 "compare": false, 00:12:21.694 "compare_and_write": false, 00:12:21.694 "abort": true, 00:12:21.694 "seek_hole": false, 00:12:21.694 "seek_data": false, 00:12:21.694 "copy": true, 00:12:21.694 "nvme_iov_md": false 00:12:21.694 }, 00:12:21.694 "memory_domains": [ 00:12:21.694 { 00:12:21.694 "dma_device_id": "system", 00:12:21.694 "dma_device_type": 1 00:12:21.694 }, 00:12:21.694 { 00:12:21.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.694 "dma_device_type": 2 00:12:21.694 } 00:12:21.694 ], 00:12:21.694 "driver_specific": {} 00:12:21.694 } 00:12:21.694 ] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.694 07:44:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.694 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.954 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.954 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.954 "name": "Existed_Raid", 00:12:21.954 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:21.954 "strip_size_kb": 0, 00:12:21.954 
"state": "online", 00:12:21.954 "raid_level": "raid1", 00:12:21.954 "superblock": true, 00:12:21.954 "num_base_bdevs": 4, 00:12:21.954 "num_base_bdevs_discovered": 4, 00:12:21.954 "num_base_bdevs_operational": 4, 00:12:21.954 "base_bdevs_list": [ 00:12:21.954 { 00:12:21.954 "name": "NewBaseBdev", 00:12:21.954 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:21.954 "is_configured": true, 00:12:21.954 "data_offset": 2048, 00:12:21.954 "data_size": 63488 00:12:21.954 }, 00:12:21.954 { 00:12:21.954 "name": "BaseBdev2", 00:12:21.954 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:21.954 "is_configured": true, 00:12:21.954 "data_offset": 2048, 00:12:21.954 "data_size": 63488 00:12:21.954 }, 00:12:21.954 { 00:12:21.954 "name": "BaseBdev3", 00:12:21.954 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:21.955 "is_configured": true, 00:12:21.955 "data_offset": 2048, 00:12:21.955 "data_size": 63488 00:12:21.955 }, 00:12:21.955 { 00:12:21.955 "name": "BaseBdev4", 00:12:21.955 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:21.955 "is_configured": true, 00:12:21.955 "data_offset": 2048, 00:12:21.955 "data_size": 63488 00:12:21.955 } 00:12:21.955 ] 00:12:21.955 }' 00:12:21.955 07:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.955 07:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.215 
07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.215 [2024-11-29 07:44:12.063435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.215 "name": "Existed_Raid", 00:12:22.215 "aliases": [ 00:12:22.215 "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8" 00:12:22.215 ], 00:12:22.215 "product_name": "Raid Volume", 00:12:22.215 "block_size": 512, 00:12:22.215 "num_blocks": 63488, 00:12:22.215 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:22.215 "assigned_rate_limits": { 00:12:22.215 "rw_ios_per_sec": 0, 00:12:22.215 "rw_mbytes_per_sec": 0, 00:12:22.215 "r_mbytes_per_sec": 0, 00:12:22.215 "w_mbytes_per_sec": 0 00:12:22.215 }, 00:12:22.215 "claimed": false, 00:12:22.215 "zoned": false, 00:12:22.215 "supported_io_types": { 00:12:22.215 "read": true, 00:12:22.215 "write": true, 00:12:22.215 "unmap": false, 00:12:22.215 "flush": false, 00:12:22.215 "reset": true, 00:12:22.215 "nvme_admin": false, 00:12:22.215 "nvme_io": false, 00:12:22.215 "nvme_io_md": false, 00:12:22.215 "write_zeroes": true, 00:12:22.215 "zcopy": false, 00:12:22.215 "get_zone_info": false, 00:12:22.215 "zone_management": false, 00:12:22.215 "zone_append": false, 00:12:22.215 "compare": false, 00:12:22.215 "compare_and_write": false, 00:12:22.215 
"abort": false, 00:12:22.215 "seek_hole": false, 00:12:22.215 "seek_data": false, 00:12:22.215 "copy": false, 00:12:22.215 "nvme_iov_md": false 00:12:22.215 }, 00:12:22.215 "memory_domains": [ 00:12:22.215 { 00:12:22.215 "dma_device_id": "system", 00:12:22.215 "dma_device_type": 1 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.215 "dma_device_type": 2 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "system", 00:12:22.215 "dma_device_type": 1 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.215 "dma_device_type": 2 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "system", 00:12:22.215 "dma_device_type": 1 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.215 "dma_device_type": 2 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "system", 00:12:22.215 "dma_device_type": 1 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.215 "dma_device_type": 2 00:12:22.215 } 00:12:22.215 ], 00:12:22.215 "driver_specific": { 00:12:22.215 "raid": { 00:12:22.215 "uuid": "eee0f8a9-8d23-4b79-b207-8f2c4e5b16a8", 00:12:22.215 "strip_size_kb": 0, 00:12:22.215 "state": "online", 00:12:22.215 "raid_level": "raid1", 00:12:22.215 "superblock": true, 00:12:22.215 "num_base_bdevs": 4, 00:12:22.215 "num_base_bdevs_discovered": 4, 00:12:22.215 "num_base_bdevs_operational": 4, 00:12:22.215 "base_bdevs_list": [ 00:12:22.215 { 00:12:22.215 "name": "NewBaseBdev", 00:12:22.215 "uuid": "d0b66763-caf1-493c-b796-3d5050c4adb2", 00:12:22.215 "is_configured": true, 00:12:22.215 "data_offset": 2048, 00:12:22.215 "data_size": 63488 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "name": "BaseBdev2", 00:12:22.215 "uuid": "b2167229-b3bf-4a92-af62-58a2e588201a", 00:12:22.215 "is_configured": true, 00:12:22.215 "data_offset": 2048, 00:12:22.215 "data_size": 63488 00:12:22.215 }, 00:12:22.215 { 
00:12:22.215 "name": "BaseBdev3", 00:12:22.215 "uuid": "20e2608a-2d32-4418-88ec-8254eb3f29ac", 00:12:22.215 "is_configured": true, 00:12:22.215 "data_offset": 2048, 00:12:22.215 "data_size": 63488 00:12:22.215 }, 00:12:22.215 { 00:12:22.215 "name": "BaseBdev4", 00:12:22.215 "uuid": "9b2cc844-5992-4455-b49d-98abb6057c8d", 00:12:22.215 "is_configured": true, 00:12:22.215 "data_offset": 2048, 00:12:22.215 "data_size": 63488 00:12:22.215 } 00:12:22.215 ] 00:12:22.215 } 00:12:22.215 } 00:12:22.215 }' 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.215 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:22.215 BaseBdev2 00:12:22.215 BaseBdev3 00:12:22.215 BaseBdev4' 00:12:22.216 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.476 [2024-11-29 07:44:12.394486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.476 [2024-11-29 07:44:12.394577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.476 [2024-11-29 07:44:12.394675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.476 [2024-11-29 07:44:12.395001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.476 [2024-11-29 07:44:12.395017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73620 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73620 ']' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73620 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.476 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73620 00:12:22.736 killing process with pid 73620 00:12:22.736 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.736 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.736 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73620' 00:12:22.736 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73620 00:12:22.736 [2024-11-29 07:44:12.442829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.736 07:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73620 00:12:22.996 [2024-11-29 07:44:12.831242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.378 ************************************ 00:12:24.379 END TEST raid_state_function_test_sb 00:12:24.379 ************************************ 00:12:24.379 07:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.379 00:12:24.379 real 0m11.412s 
00:12:24.379 user 0m18.204s 00:12:24.379 sys 0m2.012s 00:12:24.379 07:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.379 07:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.379 07:44:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:24.379 07:44:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.379 07:44:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.379 07:44:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.379 ************************************ 00:12:24.379 START TEST raid_superblock_test 00:12:24.379 ************************************ 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:24.379 07:44:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:24.379 07:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74286 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74286 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74286 ']' 00:12:24.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.379 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.379 [2024-11-29 07:44:14.084833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:24.379 [2024-11-29 07:44:14.085031] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74286 ] 00:12:24.379 [2024-11-29 07:44:14.249902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.638 [2024-11-29 07:44:14.356237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.638 [2024-11-29 07:44:14.549439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.638 [2024-11-29 07:44:14.549578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.209 
07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 malloc1 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 [2024-11-29 07:44:14.963264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.209 [2024-11-29 07:44:14.963380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.209 [2024-11-29 07:44:14.963419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.209 [2024-11-29 07:44:14.963447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.209 [2024-11-29 07:44:14.965540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.209 [2024-11-29 07:44:14.965612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.209 pt1 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 malloc2 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 [2024-11-29 07:44:15.017151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.209 [2024-11-29 07:44:15.017200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.209 [2024-11-29 07:44:15.017251] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.209 [2024-11-29 07:44:15.017259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.209 [2024-11-29 07:44:15.019248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.209 [2024-11-29 07:44:15.019332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.209 
pt2 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 malloc3 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 [2024-11-29 07:44:15.097839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.209 [2024-11-29 07:44:15.097934] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.209 [2024-11-29 07:44:15.097974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:25.209 [2024-11-29 07:44:15.098002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.209 [2024-11-29 07:44:15.099980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.209 [2024-11-29 07:44:15.100051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.209 pt3 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.209 malloc4 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.209 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.468 [2024-11-29 07:44:15.154511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.468 [2024-11-29 07:44:15.154640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.468 [2024-11-29 07:44:15.154680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.469 [2024-11-29 07:44:15.154710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.469 [2024-11-29 07:44:15.156807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.469 [2024-11-29 07:44:15.156879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.469 pt4 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.469 [2024-11-29 07:44:15.166511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.469 [2024-11-29 07:44:15.168303] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.469 [2024-11-29 07:44:15.168405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.469 [2024-11-29 07:44:15.168504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.469 [2024-11-29 07:44:15.168727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.469 [2024-11-29 07:44:15.168779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.469 [2024-11-29 07:44:15.169066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.469 [2024-11-29 07:44:15.169310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.469 [2024-11-29 07:44:15.169360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.469 [2024-11-29 07:44:15.169545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.469 
07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.469 "name": "raid_bdev1", 00:12:25.469 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:25.469 "strip_size_kb": 0, 00:12:25.469 "state": "online", 00:12:25.469 "raid_level": "raid1", 00:12:25.469 "superblock": true, 00:12:25.469 "num_base_bdevs": 4, 00:12:25.469 "num_base_bdevs_discovered": 4, 00:12:25.469 "num_base_bdevs_operational": 4, 00:12:25.469 "base_bdevs_list": [ 00:12:25.469 { 00:12:25.469 "name": "pt1", 00:12:25.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.469 "is_configured": true, 00:12:25.469 "data_offset": 2048, 00:12:25.469 "data_size": 63488 00:12:25.469 }, 00:12:25.469 { 00:12:25.469 "name": "pt2", 00:12:25.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.469 "is_configured": true, 00:12:25.469 "data_offset": 2048, 00:12:25.469 "data_size": 63488 00:12:25.469 }, 00:12:25.469 { 00:12:25.469 "name": "pt3", 00:12:25.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.469 "is_configured": true, 00:12:25.469 "data_offset": 2048, 00:12:25.469 "data_size": 63488 
00:12:25.469 }, 00:12:25.469 { 00:12:25.469 "name": "pt4", 00:12:25.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.469 "is_configured": true, 00:12:25.469 "data_offset": 2048, 00:12:25.469 "data_size": 63488 00:12:25.469 } 00:12:25.469 ] 00:12:25.469 }' 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.469 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.729 [2024-11-29 07:44:15.638049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.729 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.989 "name": "raid_bdev1", 00:12:25.989 "aliases": [ 00:12:25.989 "048c5644-d68f-43cf-b14b-740d24e21b2e" 00:12:25.989 ], 
00:12:25.989 "product_name": "Raid Volume", 00:12:25.989 "block_size": 512, 00:12:25.989 "num_blocks": 63488, 00:12:25.989 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:25.989 "assigned_rate_limits": { 00:12:25.989 "rw_ios_per_sec": 0, 00:12:25.989 "rw_mbytes_per_sec": 0, 00:12:25.989 "r_mbytes_per_sec": 0, 00:12:25.989 "w_mbytes_per_sec": 0 00:12:25.989 }, 00:12:25.989 "claimed": false, 00:12:25.989 "zoned": false, 00:12:25.989 "supported_io_types": { 00:12:25.989 "read": true, 00:12:25.989 "write": true, 00:12:25.989 "unmap": false, 00:12:25.989 "flush": false, 00:12:25.989 "reset": true, 00:12:25.989 "nvme_admin": false, 00:12:25.989 "nvme_io": false, 00:12:25.989 "nvme_io_md": false, 00:12:25.989 "write_zeroes": true, 00:12:25.989 "zcopy": false, 00:12:25.989 "get_zone_info": false, 00:12:25.989 "zone_management": false, 00:12:25.989 "zone_append": false, 00:12:25.989 "compare": false, 00:12:25.989 "compare_and_write": false, 00:12:25.989 "abort": false, 00:12:25.989 "seek_hole": false, 00:12:25.989 "seek_data": false, 00:12:25.989 "copy": false, 00:12:25.989 "nvme_iov_md": false 00:12:25.989 }, 00:12:25.989 "memory_domains": [ 00:12:25.989 { 00:12:25.989 "dma_device_id": "system", 00:12:25.989 "dma_device_type": 1 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.989 "dma_device_type": 2 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "system", 00:12:25.989 "dma_device_type": 1 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.989 "dma_device_type": 2 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "system", 00:12:25.989 "dma_device_type": 1 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.989 "dma_device_type": 2 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": "system", 00:12:25.989 "dma_device_type": 1 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:25.989 "dma_device_type": 2 00:12:25.989 } 00:12:25.989 ], 00:12:25.989 "driver_specific": { 00:12:25.989 "raid": { 00:12:25.989 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:25.989 "strip_size_kb": 0, 00:12:25.989 "state": "online", 00:12:25.989 "raid_level": "raid1", 00:12:25.989 "superblock": true, 00:12:25.989 "num_base_bdevs": 4, 00:12:25.989 "num_base_bdevs_discovered": 4, 00:12:25.989 "num_base_bdevs_operational": 4, 00:12:25.989 "base_bdevs_list": [ 00:12:25.989 { 00:12:25.989 "name": "pt1", 00:12:25.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.989 "is_configured": true, 00:12:25.989 "data_offset": 2048, 00:12:25.989 "data_size": 63488 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "name": "pt2", 00:12:25.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.989 "is_configured": true, 00:12:25.989 "data_offset": 2048, 00:12:25.989 "data_size": 63488 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "name": "pt3", 00:12:25.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.989 "is_configured": true, 00:12:25.989 "data_offset": 2048, 00:12:25.989 "data_size": 63488 00:12:25.989 }, 00:12:25.989 { 00:12:25.989 "name": "pt4", 00:12:25.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.989 "is_configured": true, 00:12:25.989 "data_offset": 2048, 00:12:25.989 "data_size": 63488 00:12:25.989 } 00:12:25.989 ] 00:12:25.989 } 00:12:25.989 } 00:12:25.989 }' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.989 pt2 00:12:25.989 pt3 00:12:25.989 pt4' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.989 07:44:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.989 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.990 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 [2024-11-29 07:44:15.969455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.251 07:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=048c5644-d68f-43cf-b14b-740d24e21b2e 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 048c5644-d68f-43cf-b14b-740d24e21b2e ']' 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 [2024-11-29 07:44:16.013135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.251 [2024-11-29 07:44:16.013204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.251 [2024-11-29 07:44:16.013316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.251 [2024-11-29 07:44:16.013430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.251 [2024-11-29 07:44:16.013496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.251 07:44:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.251 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.251 [2024-11-29 07:44:16.172849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.251 [2024-11-29 07:44:16.174749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.251 [2024-11-29 07:44:16.174843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:26.252 [2024-11-29 07:44:16.174899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:26.252 [2024-11-29 07:44:16.174979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.252 [2024-11-29 07:44:16.175089] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.252 [2024-11-29 07:44:16.175164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.252 [2024-11-29 07:44:16.175237] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:26.252 [2024-11-29 07:44:16.175253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.252 [2024-11-29 07:44:16.175264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:26.252 request: 00:12:26.252 { 00:12:26.252 "name": "raid_bdev1", 00:12:26.252 "raid_level": "raid1", 00:12:26.252 "base_bdevs": [ 00:12:26.252 "malloc1", 00:12:26.252 "malloc2", 00:12:26.252 "malloc3", 00:12:26.252 "malloc4" 00:12:26.252 ], 00:12:26.252 "superblock": false, 00:12:26.252 "method": "bdev_raid_create", 00:12:26.252 "req_id": 1 00:12:26.252 } 00:12:26.252 Got JSON-RPC error response 00:12:26.252 response: 00:12:26.252 { 00:12:26.252 "code": -17, 00:12:26.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.252 } 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.252 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.512 
07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.512 [2024-11-29 07:44:16.240702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.512 [2024-11-29 07:44:16.240815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.512 [2024-11-29 07:44:16.240852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.512 [2024-11-29 07:44:16.240896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.512 [2024-11-29 07:44:16.243154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.512 [2024-11-29 07:44:16.243230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.512 [2024-11-29 07:44:16.243332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.512 [2024-11-29 07:44:16.243418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.512 pt1 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.512 07:44:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.512 "name": "raid_bdev1", 00:12:26.512 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:26.512 "strip_size_kb": 0, 00:12:26.512 "state": "configuring", 00:12:26.512 "raid_level": "raid1", 00:12:26.512 "superblock": true, 00:12:26.512 "num_base_bdevs": 4, 00:12:26.512 "num_base_bdevs_discovered": 1, 00:12:26.512 "num_base_bdevs_operational": 4, 00:12:26.512 "base_bdevs_list": [ 00:12:26.512 { 00:12:26.512 "name": "pt1", 00:12:26.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.512 "is_configured": true, 00:12:26.512 "data_offset": 2048, 00:12:26.512 "data_size": 63488 00:12:26.512 }, 00:12:26.512 { 00:12:26.512 "name": null, 00:12:26.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.512 "is_configured": false, 00:12:26.512 "data_offset": 2048, 00:12:26.512 "data_size": 63488 00:12:26.512 }, 00:12:26.512 { 00:12:26.512 "name": null, 00:12:26.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.512 
"is_configured": false, 00:12:26.512 "data_offset": 2048, 00:12:26.512 "data_size": 63488 00:12:26.512 }, 00:12:26.512 { 00:12:26.512 "name": null, 00:12:26.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.512 "is_configured": false, 00:12:26.512 "data_offset": 2048, 00:12:26.512 "data_size": 63488 00:12:26.512 } 00:12:26.512 ] 00:12:26.512 }' 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.512 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 [2024-11-29 07:44:16.731913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.081 [2024-11-29 07:44:16.732060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.081 [2024-11-29 07:44:16.732110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:27.081 [2024-11-29 07:44:16.732154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.081 [2024-11-29 07:44:16.732618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.081 [2024-11-29 07:44:16.732685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.081 [2024-11-29 07:44:16.732775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.081 [2024-11-29 07:44:16.732803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:27.081 pt2 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 [2024-11-29 07:44:16.743883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.081 "name": "raid_bdev1", 00:12:27.081 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:27.081 "strip_size_kb": 0, 00:12:27.081 "state": "configuring", 00:12:27.081 "raid_level": "raid1", 00:12:27.081 "superblock": true, 00:12:27.081 "num_base_bdevs": 4, 00:12:27.081 "num_base_bdevs_discovered": 1, 00:12:27.081 "num_base_bdevs_operational": 4, 00:12:27.081 "base_bdevs_list": [ 00:12:27.081 { 00:12:27.081 "name": "pt1", 00:12:27.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.081 "is_configured": true, 00:12:27.081 "data_offset": 2048, 00:12:27.081 "data_size": 63488 00:12:27.081 }, 00:12:27.081 { 00:12:27.081 "name": null, 00:12:27.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.081 "is_configured": false, 00:12:27.081 "data_offset": 0, 00:12:27.081 "data_size": 63488 00:12:27.081 }, 00:12:27.081 { 00:12:27.081 "name": null, 00:12:27.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.081 "is_configured": false, 00:12:27.081 "data_offset": 2048, 00:12:27.081 "data_size": 63488 00:12:27.081 }, 00:12:27.081 { 00:12:27.081 "name": null, 00:12:27.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.081 "is_configured": false, 00:12:27.081 "data_offset": 2048, 00:12:27.081 "data_size": 63488 00:12:27.081 } 00:12:27.081 ] 00:12:27.081 }' 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.081 07:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:27.341 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.341 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.341 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.341 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-11-29 07:44:17.199132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.341 [2024-11-29 07:44:17.199260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.342 [2024-11-29 07:44:17.199285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:27.342 [2024-11-29 07:44:17.199294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-29 07:44:17.199786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-29 07:44:17.199803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.342 [2024-11-29 07:44:17.199918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.342 [2024-11-29 07:44:17.199941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.342 pt2 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.342 07:44:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 [2024-11-29 07:44:17.211071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.342 [2024-11-29 07:44:17.211150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.342 [2024-11-29 07:44:17.211170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:27.342 [2024-11-29 07:44:17.211178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-29 07:44:17.211551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-29 07:44:17.211571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.342 [2024-11-29 07:44:17.211637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.342 [2024-11-29 07:44:17.211656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.342 pt3 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 [2024-11-29 07:44:17.223017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:27.342 [2024-11-29 
07:44:17.223062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.342 [2024-11-29 07:44:17.223077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:27.342 [2024-11-29 07:44:17.223084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-29 07:44:17.223460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-29 07:44:17.223482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:27.342 [2024-11-29 07:44:17.223545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:27.342 [2024-11-29 07:44:17.223568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:27.342 [2024-11-29 07:44:17.223698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.342 [2024-11-29 07:44:17.223706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.342 [2024-11-29 07:44:17.223957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:27.342 [2024-11-29 07:44:17.224126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.342 [2024-11-29 07:44:17.224144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.342 [2024-11-29 07:44:17.224297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.342 pt4 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.342 "name": "raid_bdev1", 00:12:27.342 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:27.342 "strip_size_kb": 0, 00:12:27.342 "state": "online", 00:12:27.342 "raid_level": "raid1", 00:12:27.342 "superblock": true, 00:12:27.342 "num_base_bdevs": 4, 00:12:27.342 
"num_base_bdevs_discovered": 4, 00:12:27.342 "num_base_bdevs_operational": 4, 00:12:27.342 "base_bdevs_list": [ 00:12:27.342 { 00:12:27.342 "name": "pt1", 00:12:27.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.342 "is_configured": true, 00:12:27.342 "data_offset": 2048, 00:12:27.342 "data_size": 63488 00:12:27.342 }, 00:12:27.342 { 00:12:27.342 "name": "pt2", 00:12:27.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.342 "is_configured": true, 00:12:27.342 "data_offset": 2048, 00:12:27.342 "data_size": 63488 00:12:27.342 }, 00:12:27.342 { 00:12:27.342 "name": "pt3", 00:12:27.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.342 "is_configured": true, 00:12:27.342 "data_offset": 2048, 00:12:27.342 "data_size": 63488 00:12:27.342 }, 00:12:27.342 { 00:12:27.342 "name": "pt4", 00:12:27.342 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.342 "is_configured": true, 00:12:27.342 "data_offset": 2048, 00:12:27.342 "data_size": 63488 00:12:27.342 } 00:12:27.342 ] 00:12:27.342 }' 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.342 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.912 [2024-11-29 07:44:17.726519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.912 "name": "raid_bdev1", 00:12:27.912 "aliases": [ 00:12:27.912 "048c5644-d68f-43cf-b14b-740d24e21b2e" 00:12:27.912 ], 00:12:27.912 "product_name": "Raid Volume", 00:12:27.912 "block_size": 512, 00:12:27.912 "num_blocks": 63488, 00:12:27.912 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:27.912 "assigned_rate_limits": { 00:12:27.912 "rw_ios_per_sec": 0, 00:12:27.912 "rw_mbytes_per_sec": 0, 00:12:27.912 "r_mbytes_per_sec": 0, 00:12:27.912 "w_mbytes_per_sec": 0 00:12:27.912 }, 00:12:27.912 "claimed": false, 00:12:27.912 "zoned": false, 00:12:27.912 "supported_io_types": { 00:12:27.912 "read": true, 00:12:27.912 "write": true, 00:12:27.912 "unmap": false, 00:12:27.912 "flush": false, 00:12:27.912 "reset": true, 00:12:27.912 "nvme_admin": false, 00:12:27.912 "nvme_io": false, 00:12:27.912 "nvme_io_md": false, 00:12:27.912 "write_zeroes": true, 00:12:27.912 "zcopy": false, 00:12:27.912 "get_zone_info": false, 00:12:27.912 "zone_management": false, 00:12:27.912 "zone_append": false, 00:12:27.912 "compare": false, 00:12:27.912 "compare_and_write": false, 00:12:27.912 "abort": false, 00:12:27.912 "seek_hole": false, 00:12:27.912 "seek_data": false, 00:12:27.912 "copy": false, 00:12:27.912 "nvme_iov_md": false 00:12:27.912 }, 00:12:27.912 "memory_domains": [ 00:12:27.912 { 00:12:27.913 "dma_device_id": "system", 00:12:27.913 
"dma_device_type": 1 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.913 "dma_device_type": 2 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "system", 00:12:27.913 "dma_device_type": 1 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.913 "dma_device_type": 2 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "system", 00:12:27.913 "dma_device_type": 1 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.913 "dma_device_type": 2 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "system", 00:12:27.913 "dma_device_type": 1 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.913 "dma_device_type": 2 00:12:27.913 } 00:12:27.913 ], 00:12:27.913 "driver_specific": { 00:12:27.913 "raid": { 00:12:27.913 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:27.913 "strip_size_kb": 0, 00:12:27.913 "state": "online", 00:12:27.913 "raid_level": "raid1", 00:12:27.913 "superblock": true, 00:12:27.913 "num_base_bdevs": 4, 00:12:27.913 "num_base_bdevs_discovered": 4, 00:12:27.913 "num_base_bdevs_operational": 4, 00:12:27.913 "base_bdevs_list": [ 00:12:27.913 { 00:12:27.913 "name": "pt1", 00:12:27.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "pt2", 00:12:27.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "pt3", 00:12:27.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "pt4", 00:12:27.913 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 } 00:12:27.913 ] 00:12:27.913 } 00:12:27.913 } 00:12:27.913 }' 00:12:27.913 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.913 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.913 pt2 00:12:27.913 pt3 00:12:27.913 pt4' 00:12:27.913 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.913 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.913 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:28.173 07:44:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:28.173 [2024-11-29 07:44:18.049888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 048c5644-d68f-43cf-b14b-740d24e21b2e '!=' 048c5644-d68f-43cf-b14b-740d24e21b2e ']' 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.173 [2024-11-29 07:44:18.097566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:28.173 07:44:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.173 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.433 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.433 "name": "raid_bdev1", 00:12:28.433 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:28.433 "strip_size_kb": 0, 00:12:28.433 "state": "online", 
00:12:28.433 "raid_level": "raid1", 00:12:28.433 "superblock": true, 00:12:28.433 "num_base_bdevs": 4, 00:12:28.433 "num_base_bdevs_discovered": 3, 00:12:28.433 "num_base_bdevs_operational": 3, 00:12:28.433 "base_bdevs_list": [ 00:12:28.433 { 00:12:28.433 "name": null, 00:12:28.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.433 "is_configured": false, 00:12:28.433 "data_offset": 0, 00:12:28.433 "data_size": 63488 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "pt2", 00:12:28.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 2048, 00:12:28.433 "data_size": 63488 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "pt3", 00:12:28.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 2048, 00:12:28.433 "data_size": 63488 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "pt4", 00:12:28.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 2048, 00:12:28.433 "data_size": 63488 00:12:28.433 } 00:12:28.433 ] 00:12:28.433 }' 00:12:28.433 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.433 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 [2024-11-29 07:44:18.448953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.694 [2024-11-29 07:44:18.449030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.694 [2024-11-29 07:44:18.449136] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:28.694 [2024-11-29 07:44:18.449232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.694 [2024-11-29 07:44:18.449277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.694 
07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 [2024-11-29 07:44:18.544786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.694 [2024-11-29 07:44:18.544889] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.694 [2024-11-29 07:44:18.544925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:28.694 [2024-11-29 07:44:18.544951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.694 [2024-11-29 07:44:18.547041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.694 [2024-11-29 07:44:18.547133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.694 [2024-11-29 07:44:18.547235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.694 [2024-11-29 07:44:18.547313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.694 pt2 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.694 "name": "raid_bdev1", 00:12:28.694 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:28.694 "strip_size_kb": 0, 00:12:28.694 "state": "configuring", 00:12:28.694 "raid_level": "raid1", 00:12:28.694 "superblock": true, 00:12:28.694 "num_base_bdevs": 4, 00:12:28.694 "num_base_bdevs_discovered": 1, 00:12:28.694 "num_base_bdevs_operational": 3, 00:12:28.694 "base_bdevs_list": [ 00:12:28.694 { 00:12:28.694 "name": null, 00:12:28.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.694 "is_configured": false, 00:12:28.694 "data_offset": 2048, 00:12:28.694 "data_size": 63488 00:12:28.694 }, 00:12:28.694 { 00:12:28.694 "name": "pt2", 00:12:28.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.694 "is_configured": true, 00:12:28.694 "data_offset": 2048, 00:12:28.694 "data_size": 63488 00:12:28.694 }, 00:12:28.694 { 00:12:28.694 "name": null, 00:12:28.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.694 "is_configured": false, 00:12:28.694 "data_offset": 2048, 00:12:28.694 "data_size": 63488 00:12:28.694 }, 00:12:28.694 { 00:12:28.694 "name": null, 00:12:28.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.694 "is_configured": false, 00:12:28.694 "data_offset": 2048, 00:12:28.694 "data_size": 63488 00:12:28.694 } 00:12:28.694 ] 00:12:28.694 }' 
00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.694 07:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.264 [2024-11-29 07:44:19.012001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.264 [2024-11-29 07:44:19.012111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.264 [2024-11-29 07:44:19.012157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:29.264 [2024-11-29 07:44:19.012167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.264 [2024-11-29 07:44:19.012657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.264 [2024-11-29 07:44:19.012675] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.264 [2024-11-29 07:44:19.012759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:29.264 [2024-11-29 07:44:19.012780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.264 pt3 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.264 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.264 "name": "raid_bdev1", 00:12:29.264 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:29.264 "strip_size_kb": 0, 00:12:29.264 "state": "configuring", 00:12:29.264 "raid_level": "raid1", 00:12:29.264 "superblock": true, 00:12:29.264 "num_base_bdevs": 4, 00:12:29.264 "num_base_bdevs_discovered": 2, 00:12:29.264 "num_base_bdevs_operational": 3, 00:12:29.264 
"base_bdevs_list": [ 00:12:29.264 { 00:12:29.264 "name": null, 00:12:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.264 "is_configured": false, 00:12:29.264 "data_offset": 2048, 00:12:29.264 "data_size": 63488 00:12:29.264 }, 00:12:29.264 { 00:12:29.264 "name": "pt2", 00:12:29.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.264 "is_configured": true, 00:12:29.264 "data_offset": 2048, 00:12:29.264 "data_size": 63488 00:12:29.264 }, 00:12:29.264 { 00:12:29.265 "name": "pt3", 00:12:29.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.265 "is_configured": true, 00:12:29.265 "data_offset": 2048, 00:12:29.265 "data_size": 63488 00:12:29.265 }, 00:12:29.265 { 00:12:29.265 "name": null, 00:12:29.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.265 "is_configured": false, 00:12:29.265 "data_offset": 2048, 00:12:29.265 "data_size": 63488 00:12:29.265 } 00:12:29.265 ] 00:12:29.265 }' 00:12:29.265 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.265 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 [2024-11-29 07:44:19.435288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:29.524 [2024-11-29 07:44:19.435403] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.524 [2024-11-29 07:44:19.435435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:29.524 [2024-11-29 07:44:19.435444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.524 [2024-11-29 07:44:19.435889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.524 [2024-11-29 07:44:19.435907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:29.524 [2024-11-29 07:44:19.435991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:29.524 [2024-11-29 07:44:19.436010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:29.524 [2024-11-29 07:44:19.436143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:29.524 [2024-11-29 07:44:19.436152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.524 [2024-11-29 07:44:19.436388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:29.524 [2024-11-29 07:44:19.436532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.524 [2024-11-29 07:44:19.436551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:29.524 [2024-11-29 07:44:19.436691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.524 pt4 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.784 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.784 "name": "raid_bdev1", 00:12:29.784 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:29.784 "strip_size_kb": 0, 00:12:29.784 "state": "online", 00:12:29.784 "raid_level": "raid1", 00:12:29.784 "superblock": true, 00:12:29.784 "num_base_bdevs": 4, 00:12:29.784 "num_base_bdevs_discovered": 3, 00:12:29.784 "num_base_bdevs_operational": 3, 00:12:29.784 "base_bdevs_list": [ 00:12:29.784 { 00:12:29.784 "name": null, 00:12:29.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.784 "is_configured": false, 00:12:29.784 
"data_offset": 2048, 00:12:29.784 "data_size": 63488 00:12:29.784 }, 00:12:29.784 { 00:12:29.784 "name": "pt2", 00:12:29.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.784 "is_configured": true, 00:12:29.784 "data_offset": 2048, 00:12:29.784 "data_size": 63488 00:12:29.784 }, 00:12:29.784 { 00:12:29.784 "name": "pt3", 00:12:29.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.784 "is_configured": true, 00:12:29.784 "data_offset": 2048, 00:12:29.784 "data_size": 63488 00:12:29.784 }, 00:12:29.784 { 00:12:29.784 "name": "pt4", 00:12:29.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.784 "is_configured": true, 00:12:29.784 "data_offset": 2048, 00:12:29.784 "data_size": 63488 00:12:29.784 } 00:12:29.784 ] 00:12:29.784 }' 00:12:29.784 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.784 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 [2024-11-29 07:44:19.886452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.044 [2024-11-29 07:44:19.886524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.044 [2024-11-29 07:44:19.886615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.044 [2024-11-29 07:44:19.886696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.044 [2024-11-29 07:44:19.886753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:30.044 07:44:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.044 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 [2024-11-29 07:44:19.958336] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.044 [2024-11-29 07:44:19.958436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:30.044 [2024-11-29 07:44:19.958473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:30.044 [2024-11-29 07:44:19.958503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.044 [2024-11-29 07:44:19.960711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.044 [2024-11-29 07:44:19.960791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.044 [2024-11-29 07:44:19.960897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:30.044 [2024-11-29 07:44:19.960972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.044 [2024-11-29 07:44:19.961158] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:30.044 [2024-11-29 07:44:19.961219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.045 [2024-11-29 07:44:19.961285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:30.045 [2024-11-29 07:44:19.961385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.045 [2024-11-29 07:44:19.961526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.045 pt1 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.045 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.305 07:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.305 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.305 "name": "raid_bdev1", 00:12:30.305 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:30.305 "strip_size_kb": 0, 00:12:30.305 "state": "configuring", 00:12:30.305 "raid_level": "raid1", 00:12:30.305 "superblock": true, 00:12:30.305 "num_base_bdevs": 4, 00:12:30.305 "num_base_bdevs_discovered": 2, 00:12:30.305 "num_base_bdevs_operational": 3, 00:12:30.305 "base_bdevs_list": [ 00:12:30.305 { 00:12:30.305 "name": null, 00:12:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.305 "is_configured": false, 00:12:30.305 "data_offset": 2048, 00:12:30.305 
"data_size": 63488 00:12:30.305 }, 00:12:30.305 { 00:12:30.305 "name": "pt2", 00:12:30.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.305 "is_configured": true, 00:12:30.305 "data_offset": 2048, 00:12:30.305 "data_size": 63488 00:12:30.305 }, 00:12:30.305 { 00:12:30.305 "name": "pt3", 00:12:30.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.305 "is_configured": true, 00:12:30.305 "data_offset": 2048, 00:12:30.305 "data_size": 63488 00:12:30.305 }, 00:12:30.305 { 00:12:30.305 "name": null, 00:12:30.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.305 "is_configured": false, 00:12:30.305 "data_offset": 2048, 00:12:30.305 "data_size": 63488 00:12:30.305 } 00:12:30.305 ] 00:12:30.305 }' 00:12:30.305 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.305 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.565 [2024-11-29 
07:44:20.477480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:30.565 [2024-11-29 07:44:20.477543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.565 [2024-11-29 07:44:20.477566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:30.565 [2024-11-29 07:44:20.477574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.565 [2024-11-29 07:44:20.478019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.565 [2024-11-29 07:44:20.478050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:30.565 [2024-11-29 07:44:20.478142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:30.565 [2024-11-29 07:44:20.478165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:30.565 [2024-11-29 07:44:20.478305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:30.565 [2024-11-29 07:44:20.478314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.565 [2024-11-29 07:44:20.478571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:30.565 [2024-11-29 07:44:20.478715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:30.565 [2024-11-29 07:44:20.478726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:30.565 [2024-11-29 07:44:20.478851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.565 pt4 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.565 07:44:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.565 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.826 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.826 "name": "raid_bdev1", 00:12:30.826 "uuid": "048c5644-d68f-43cf-b14b-740d24e21b2e", 00:12:30.826 "strip_size_kb": 0, 00:12:30.826 "state": "online", 00:12:30.826 "raid_level": "raid1", 00:12:30.826 "superblock": true, 00:12:30.826 "num_base_bdevs": 4, 00:12:30.826 "num_base_bdevs_discovered": 3, 00:12:30.826 "num_base_bdevs_operational": 3, 00:12:30.826 "base_bdevs_list": [ 00:12:30.826 { 
00:12:30.826 "name": null, 00:12:30.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.826 "is_configured": false, 00:12:30.826 "data_offset": 2048, 00:12:30.826 "data_size": 63488 00:12:30.826 }, 00:12:30.826 { 00:12:30.826 "name": "pt2", 00:12:30.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.826 "is_configured": true, 00:12:30.826 "data_offset": 2048, 00:12:30.826 "data_size": 63488 00:12:30.826 }, 00:12:30.826 { 00:12:30.826 "name": "pt3", 00:12:30.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.826 "is_configured": true, 00:12:30.826 "data_offset": 2048, 00:12:30.826 "data_size": 63488 00:12:30.826 }, 00:12:30.826 { 00:12:30.826 "name": "pt4", 00:12:30.826 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.826 "is_configured": true, 00:12:30.826 "data_offset": 2048, 00:12:30.826 "data_size": 63488 00:12:30.826 } 00:12:30.826 ] 00:12:30.826 }' 00:12:30.826 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.826 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.086 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:31.086 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.086 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.086 07:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:31.086 07:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.086 07:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:31.086 07:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:31.086 07:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.086 
07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.086 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.086 [2024-11-29 07:44:21.016843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 048c5644-d68f-43cf-b14b-740d24e21b2e '!=' 048c5644-d68f-43cf-b14b-740d24e21b2e ']' 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74286 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74286 ']' 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74286 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74286 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74286' 00:12:31.346 killing process with pid 74286 00:12:31.346 07:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74286 00:12:31.346 [2024-11-29 07:44:21.072554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.346 [2024-11-29 07:44:21.072686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.346 07:44:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74286 00:12:31.346 [2024-11-29 07:44:21.072788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.346 [2024-11-29 07:44:21.072803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:31.605 [2024-11-29 07:44:21.450667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.984 07:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:32.984 00:12:32.984 real 0m8.531s 00:12:32.984 user 0m13.537s 00:12:32.984 sys 0m1.499s 00:12:32.984 ************************************ 00:12:32.984 END TEST raid_superblock_test 00:12:32.984 ************************************ 00:12:32.984 07:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.984 07:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.984 07:44:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:32.985 07:44:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.985 07:44:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.985 07:44:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.985 ************************************ 00:12:32.985 START TEST raid_read_error_test 00:12:32.985 ************************************ 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:32.985 
07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:32.985 07:44:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wvqrYjAWrv 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74773 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74773 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74773 ']' 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.985 07:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.985 [2024-11-29 07:44:22.704025] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:32.985 [2024-11-29 07:44:22.704537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74773 ] 00:12:32.985 [2024-11-29 07:44:22.879618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.244 [2024-11-29 07:44:22.986181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.244 [2024-11-29 07:44:23.183208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.244 [2024-11-29 07:44:23.183330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.814 BaseBdev1_malloc 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.814 true 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.814 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.814 [2024-11-29 07:44:23.587663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.814 [2024-11-29 07:44:23.587756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.815 [2024-11-29 07:44:23.587797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.815 [2024-11-29 07:44:23.587807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.815 [2024-11-29 07:44:23.589828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.815 [2024-11-29 07:44:23.589871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.815 BaseBdev1 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 BaseBdev2_malloc 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 true 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 [2024-11-29 07:44:23.653170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.815 [2024-11-29 07:44:23.653284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.815 [2024-11-29 07:44:23.653305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.815 [2024-11-29 07:44:23.653315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.815 [2024-11-29 07:44:23.655287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.815 [2024-11-29 07:44:23.655324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.815 BaseBdev2 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 BaseBdev3_malloc 00:12:33.815 07:44:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 true 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 [2024-11-29 07:44:23.751571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.815 [2024-11-29 07:44:23.751624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.815 [2024-11-29 07:44:23.751641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.815 [2024-11-29 07:44:23.751651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.815 [2024-11-29 07:44:23.753815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.815 [2024-11-29 07:44:23.753893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.815 BaseBdev3 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.815 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.075 BaseBdev4_malloc 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.075 true 00:12:34.075 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.076 [2024-11-29 07:44:23.817272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:34.076 [2024-11-29 07:44:23.817323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.076 [2024-11-29 07:44:23.817341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:34.076 [2024-11-29 07:44:23.817351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.076 [2024-11-29 07:44:23.819339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.076 [2024-11-29 07:44:23.819380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:34.076 BaseBdev4 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.076 [2024-11-29 07:44:23.829312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.076 [2024-11-29 07:44:23.831015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.076 [2024-11-29 07:44:23.831089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.076 [2024-11-29 07:44:23.831159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.076 [2024-11-29 07:44:23.831387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:34.076 [2024-11-29 07:44:23.831401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.076 [2024-11-29 07:44:23.831623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:34.076 [2024-11-29 07:44:23.831780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:34.076 [2024-11-29 07:44:23.831790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:34.076 [2024-11-29 07:44:23.831947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:34.076 07:44:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.076 "name": "raid_bdev1", 00:12:34.076 "uuid": "9b491e14-33aa-4e06-b1c1-86eae46ab8e0", 00:12:34.076 "strip_size_kb": 0, 00:12:34.076 "state": "online", 00:12:34.076 "raid_level": "raid1", 00:12:34.076 "superblock": true, 00:12:34.076 "num_base_bdevs": 4, 00:12:34.076 "num_base_bdevs_discovered": 4, 00:12:34.076 "num_base_bdevs_operational": 4, 00:12:34.076 "base_bdevs_list": [ 00:12:34.076 { 
00:12:34.076 "name": "BaseBdev1", 00:12:34.076 "uuid": "9112b1b5-806c-518c-a012-009707255dee", 00:12:34.076 "is_configured": true, 00:12:34.076 "data_offset": 2048, 00:12:34.076 "data_size": 63488 00:12:34.076 }, 00:12:34.076 { 00:12:34.076 "name": "BaseBdev2", 00:12:34.076 "uuid": "841322d3-2040-51a7-8c31-835070fedfeb", 00:12:34.076 "is_configured": true, 00:12:34.076 "data_offset": 2048, 00:12:34.076 "data_size": 63488 00:12:34.076 }, 00:12:34.076 { 00:12:34.076 "name": "BaseBdev3", 00:12:34.076 "uuid": "980bb38e-b85c-53a7-804f-9e105cb1ad9c", 00:12:34.076 "is_configured": true, 00:12:34.076 "data_offset": 2048, 00:12:34.076 "data_size": 63488 00:12:34.076 }, 00:12:34.076 { 00:12:34.076 "name": "BaseBdev4", 00:12:34.076 "uuid": "ce1f59c7-d694-5543-a84b-b88188dfb90e", 00:12:34.076 "is_configured": true, 00:12:34.076 "data_offset": 2048, 00:12:34.076 "data_size": 63488 00:12:34.076 } 00:12:34.076 ] 00:12:34.076 }' 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.076 07:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.645 07:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:34.645 07:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.645 [2024-11-29 07:44:24.377593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.602 07:44:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.602 07:44:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.602 "name": "raid_bdev1", 00:12:35.602 "uuid": "9b491e14-33aa-4e06-b1c1-86eae46ab8e0", 00:12:35.602 "strip_size_kb": 0, 00:12:35.602 "state": "online", 00:12:35.602 "raid_level": "raid1", 00:12:35.602 "superblock": true, 00:12:35.602 "num_base_bdevs": 4, 00:12:35.602 "num_base_bdevs_discovered": 4, 00:12:35.602 "num_base_bdevs_operational": 4, 00:12:35.602 "base_bdevs_list": [ 00:12:35.602 { 00:12:35.602 "name": "BaseBdev1", 00:12:35.602 "uuid": "9112b1b5-806c-518c-a012-009707255dee", 00:12:35.602 "is_configured": true, 00:12:35.602 "data_offset": 2048, 00:12:35.602 "data_size": 63488 00:12:35.602 }, 00:12:35.602 { 00:12:35.602 "name": "BaseBdev2", 00:12:35.602 "uuid": "841322d3-2040-51a7-8c31-835070fedfeb", 00:12:35.602 "is_configured": true, 00:12:35.602 "data_offset": 2048, 00:12:35.602 "data_size": 63488 00:12:35.602 }, 00:12:35.602 { 00:12:35.602 "name": "BaseBdev3", 00:12:35.602 "uuid": "980bb38e-b85c-53a7-804f-9e105cb1ad9c", 00:12:35.602 "is_configured": true, 00:12:35.602 "data_offset": 2048, 00:12:35.602 "data_size": 63488 00:12:35.602 }, 00:12:35.602 { 00:12:35.602 "name": "BaseBdev4", 00:12:35.602 "uuid": "ce1f59c7-d694-5543-a84b-b88188dfb90e", 00:12:35.602 "is_configured": true, 00:12:35.602 "data_offset": 2048, 00:12:35.602 "data_size": 63488 00:12:35.602 } 00:12:35.602 ] 00:12:35.602 }' 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.602 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.860 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.860 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.860 07:44:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.860 [2024-11-29 07:44:25.770101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.860 [2024-11-29 07:44:25.770229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.860 [2024-11-29 07:44:25.772953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.860 [2024-11-29 07:44:25.773058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.860 [2024-11-29 07:44:25.773187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.860 [2024-11-29 07:44:25.773201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:35.860 { 00:12:35.860 "results": [ 00:12:35.860 { 00:12:35.860 "job": "raid_bdev1", 00:12:35.860 "core_mask": "0x1", 00:12:35.860 "workload": "randrw", 00:12:35.860 "percentage": 50, 00:12:35.860 "status": "finished", 00:12:35.860 "queue_depth": 1, 00:12:35.860 "io_size": 131072, 00:12:35.860 "runtime": 1.393573, 00:12:35.861 "iops": 10778.76795833444, 00:12:35.861 "mibps": 1347.345994791805, 00:12:35.861 "io_failed": 0, 00:12:35.861 "io_timeout": 0, 00:12:35.861 "avg_latency_us": 90.17900517150807, 00:12:35.861 "min_latency_us": 22.46986899563319, 00:12:35.861 "max_latency_us": 1466.6899563318777 00:12:35.861 } 00:12:35.861 ], 00:12:35.861 "core_count": 1 00:12:35.861 } 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74773 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74773 ']' 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74773 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.861 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74773 00:12:36.120 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.120 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.120 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74773' 00:12:36.120 killing process with pid 74773 00:12:36.120 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74773 00:12:36.120 [2024-11-29 07:44:25.820117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.121 07:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74773 00:12:36.380 [2024-11-29 07:44:26.131389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.319 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:37.319 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wvqrYjAWrv 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.579 ************************************ 00:12:37.579 END TEST raid_read_error_test 00:12:37.579 ************************************ 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:37.579 00:12:37.579 real 0m4.675s 00:12:37.579 user 0m5.554s 00:12:37.579 sys 0m0.569s 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.579 07:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 07:44:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:37.579 07:44:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:37.579 07:44:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.579 07:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 ************************************ 00:12:37.579 START TEST raid_write_error_test 00:12:37.579 ************************************ 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mV6A4SSznD 00:12:37.579 07:44:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74919 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74919 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74919 ']' 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.579 07:44:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 [2024-11-29 07:44:27.451689] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:37.579 [2024-11-29 07:44:27.451806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:12:37.839 [2024-11-29 07:44:27.622036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.839 [2024-11-29 07:44:27.729325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.099 [2024-11-29 07:44:27.925408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.099 [2024-11-29 07:44:27.925496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.360 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 BaseBdev1_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 true 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 [2024-11-29 07:44:28.338513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.619 [2024-11-29 07:44:28.338568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.619 [2024-11-29 07:44:28.338586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.619 [2024-11-29 07:44:28.338596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.619 [2024-11-29 07:44:28.340642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.619 [2024-11-29 07:44:28.340685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.619 BaseBdev1 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 BaseBdev2_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.619 07:44:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 true 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 [2024-11-29 07:44:28.403356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.619 [2024-11-29 07:44:28.403408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.619 [2024-11-29 07:44:28.403423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.619 [2024-11-29 07:44:28.403433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.619 [2024-11-29 07:44:28.405453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.619 [2024-11-29 07:44:28.405503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.619 BaseBdev2 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.619 BaseBdev3_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 true 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.619 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.620 [2024-11-29 07:44:28.506039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.620 [2024-11-29 07:44:28.506089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.620 [2024-11-29 07:44:28.506115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.620 [2024-11-29 07:44:28.506126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.620 [2024-11-29 07:44:28.508150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.620 [2024-11-29 07:44:28.508250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.620 BaseBdev3 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.620 BaseBdev4_malloc 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.620 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.878 true 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.878 [2024-11-29 07:44:28.570791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:38.878 [2024-11-29 07:44:28.570887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.878 [2024-11-29 07:44:28.570908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:38.878 [2024-11-29 07:44:28.570919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.878 [2024-11-29 07:44:28.573002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.878 [2024-11-29 07:44:28.573046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:38.878 BaseBdev4 
00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.878 [2024-11-29 07:44:28.582826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.878 [2024-11-29 07:44:28.584709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.878 [2024-11-29 07:44:28.584796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.878 [2024-11-29 07:44:28.584855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.878 [2024-11-29 07:44:28.585096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:38.878 [2024-11-29 07:44:28.585121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.878 [2024-11-29 07:44:28.585360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:38.878 [2024-11-29 07:44:28.585523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:38.878 [2024-11-29 07:44:28.585538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:38.878 [2024-11-29 07:44:28.585715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.878 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.879 "name": "raid_bdev1", 00:12:38.879 "uuid": "58ebd4a2-1e4c-4fcf-8d9b-0b3e01857d44", 00:12:38.879 "strip_size_kb": 0, 00:12:38.879 "state": "online", 00:12:38.879 "raid_level": "raid1", 00:12:38.879 "superblock": true, 00:12:38.879 "num_base_bdevs": 4, 00:12:38.879 "num_base_bdevs_discovered": 4, 00:12:38.879 
"num_base_bdevs_operational": 4, 00:12:38.879 "base_bdevs_list": [ 00:12:38.879 { 00:12:38.879 "name": "BaseBdev1", 00:12:38.879 "uuid": "4ba71508-2028-5de9-8da7-3519b0543e04", 00:12:38.879 "is_configured": true, 00:12:38.879 "data_offset": 2048, 00:12:38.879 "data_size": 63488 00:12:38.879 }, 00:12:38.879 { 00:12:38.879 "name": "BaseBdev2", 00:12:38.879 "uuid": "b56e8eec-472f-56ff-b50f-6d7e5afb9dd4", 00:12:38.879 "is_configured": true, 00:12:38.879 "data_offset": 2048, 00:12:38.879 "data_size": 63488 00:12:38.879 }, 00:12:38.879 { 00:12:38.879 "name": "BaseBdev3", 00:12:38.879 "uuid": "3084262a-b7dc-5441-9127-37153fc2b297", 00:12:38.879 "is_configured": true, 00:12:38.879 "data_offset": 2048, 00:12:38.879 "data_size": 63488 00:12:38.879 }, 00:12:38.879 { 00:12:38.879 "name": "BaseBdev4", 00:12:38.879 "uuid": "b9184b0d-bbc3-52c1-b5db-532bfa0e6b4a", 00:12:38.879 "is_configured": true, 00:12:38.879 "data_offset": 2048, 00:12:38.879 "data_size": 63488 00:12:38.879 } 00:12:38.879 ] 00:12:38.879 }' 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.879 07:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.137 07:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.137 07:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.397 [2024-11-29 07:44:29.139184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.337 [2024-11-29 07:44:30.057487] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:40.337 [2024-11-29 07:44:30.057548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.337 [2024-11-29 07:44:30.057774] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.337 "name": "raid_bdev1", 00:12:40.337 "uuid": "58ebd4a2-1e4c-4fcf-8d9b-0b3e01857d44", 00:12:40.337 "strip_size_kb": 0, 00:12:40.337 "state": "online", 00:12:40.337 "raid_level": "raid1", 00:12:40.337 "superblock": true, 00:12:40.337 "num_base_bdevs": 4, 00:12:40.337 "num_base_bdevs_discovered": 3, 00:12:40.337 "num_base_bdevs_operational": 3, 00:12:40.337 "base_bdevs_list": [ 00:12:40.337 { 00:12:40.337 "name": null, 00:12:40.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.337 "is_configured": false, 00:12:40.337 "data_offset": 0, 00:12:40.337 "data_size": 63488 00:12:40.337 }, 00:12:40.337 { 00:12:40.337 "name": "BaseBdev2", 00:12:40.337 "uuid": "b56e8eec-472f-56ff-b50f-6d7e5afb9dd4", 00:12:40.337 "is_configured": true, 00:12:40.337 "data_offset": 2048, 00:12:40.337 "data_size": 63488 00:12:40.337 }, 00:12:40.337 { 00:12:40.337 "name": "BaseBdev3", 00:12:40.337 "uuid": "3084262a-b7dc-5441-9127-37153fc2b297", 00:12:40.337 "is_configured": true, 00:12:40.337 "data_offset": 2048, 00:12:40.337 "data_size": 63488 00:12:40.337 }, 00:12:40.337 { 00:12:40.337 "name": "BaseBdev4", 00:12:40.337 "uuid": "b9184b0d-bbc3-52c1-b5db-532bfa0e6b4a", 00:12:40.337 "is_configured": true, 00:12:40.337 "data_offset": 2048, 00:12:40.337 "data_size": 63488 00:12:40.337 } 00:12:40.337 ] 
00:12:40.337 }' 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.337 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.598 [2024-11-29 07:44:30.472861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.598 [2024-11-29 07:44:30.472969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.598 [2024-11-29 07:44:30.475752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.598 [2024-11-29 07:44:30.475848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.598 [2024-11-29 07:44:30.475990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.598 [2024-11-29 07:44:30.476041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:40.598 { 00:12:40.598 "results": [ 00:12:40.598 { 00:12:40.598 "job": "raid_bdev1", 00:12:40.598 "core_mask": "0x1", 00:12:40.598 "workload": "randrw", 00:12:40.598 "percentage": 50, 00:12:40.598 "status": "finished", 00:12:40.598 "queue_depth": 1, 00:12:40.598 "io_size": 131072, 00:12:40.598 "runtime": 1.334582, 00:12:40.598 "iops": 11668.072849776185, 00:12:40.598 "mibps": 1458.509106222023, 00:12:40.598 "io_failed": 0, 00:12:40.598 "io_timeout": 0, 00:12:40.598 "avg_latency_us": 83.11167395964317, 00:12:40.598 "min_latency_us": 22.358078602620086, 00:12:40.598 "max_latency_us": 1488.1537117903931 00:12:40.598 } 00:12:40.598 ], 00:12:40.598 "core_count": 1 
00:12:40.598 } 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74919 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74919 ']' 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74919 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74919 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74919' 00:12:40.598 killing process with pid 74919 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74919 00:12:40.598 [2024-11-29 07:44:30.525168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.598 07:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74919 00:12:41.168 [2024-11-29 07:44:30.834236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mV6A4SSznD 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.121 00:12:42.121 real 0m4.627s 00:12:42.121 user 0m5.441s 00:12:42.121 sys 0m0.573s 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.121 07:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.121 ************************************ 00:12:42.121 END TEST raid_write_error_test 00:12:42.121 ************************************ 00:12:42.121 07:44:32 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:42.121 07:44:32 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:42.121 07:44:32 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:42.121 07:44:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:42.121 07:44:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.121 07:44:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.121 ************************************ 00:12:42.121 START TEST raid_rebuild_test 00:12:42.121 ************************************ 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.121 
07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.121 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75057 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75057 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75057 ']' 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.388 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.388 [2024-11-29 07:44:32.145512] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:42.388 [2024-11-29 07:44:32.145722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.388 Zero copy mechanism will not be used. 
00:12:42.388 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:12:42.388 [2024-11-29 07:44:32.315816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.647 [2024-11-29 07:44:32.427042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.907 [2024-11-29 07:44:32.613176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.907 [2024-11-29 07:44:32.613314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.168 07:44:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.168 BaseBdev1_malloc 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.168 [2024-11-29 07:44:33.019729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.168 [2024-11-29 07:44:33.019792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.168 [2024-11-29 
07:44:33.019815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.168 [2024-11-29 07:44:33.019833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.168 [2024-11-29 07:44:33.021933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.168 [2024-11-29 07:44:33.021976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.168 BaseBdev1 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.168 BaseBdev2_malloc 00:12:43.168 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.169 [2024-11-29 07:44:33.074190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:43.169 [2024-11-29 07:44:33.074249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.169 [2024-11-29 07:44:33.074272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.169 [2024-11-29 07:44:33.074284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:43.169 [2024-11-29 07:44:33.076258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.169 [2024-11-29 07:44:33.076299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.169 BaseBdev2 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.169 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.429 spare_malloc 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.429 spare_delay 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.429 [2024-11-29 07:44:33.153170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.429 [2024-11-29 07:44:33.153301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.429 [2024-11-29 07:44:33.153327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:12:43.429 [2024-11-29 07:44:33.153339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.429 [2024-11-29 07:44:33.155474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.429 [2024-11-29 07:44:33.155527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.429 spare 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.429 [2024-11-29 07:44:33.165194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.429 [2024-11-29 07:44:33.166896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.429 [2024-11-29 07:44:33.166982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.429 [2024-11-29 07:44:33.166995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.429 [2024-11-29 07:44:33.167236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:43.429 [2024-11-29 07:44:33.167392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.429 [2024-11-29 07:44:33.167405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.429 [2024-11-29 07:44:33.167552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 
07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.429 "name": "raid_bdev1", 00:12:43.429 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:43.429 "strip_size_kb": 0, 00:12:43.429 "state": "online", 00:12:43.429 "raid_level": "raid1", 00:12:43.429 "superblock": false, 00:12:43.429 "num_base_bdevs": 2, 00:12:43.429 "num_base_bdevs_discovered": 
2, 00:12:43.429 "num_base_bdevs_operational": 2, 00:12:43.429 "base_bdevs_list": [ 00:12:43.429 { 00:12:43.429 "name": "BaseBdev1", 00:12:43.429 "uuid": "eaef5136-5dd1-5374-bee5-dfe371cb71e4", 00:12:43.429 "is_configured": true, 00:12:43.429 "data_offset": 0, 00:12:43.429 "data_size": 65536 00:12:43.429 }, 00:12:43.429 { 00:12:43.429 "name": "BaseBdev2", 00:12:43.429 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:43.429 "is_configured": true, 00:12:43.429 "data_offset": 0, 00:12:43.429 "data_size": 65536 00:12:43.429 } 00:12:43.429 ] 00:12:43.429 }' 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.429 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.690 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.690 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.690 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.690 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:43.690 [2024-11-29 07:44:33.632662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.950 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:43.950 [2024-11-29 07:44:33.880042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.211 /dev/nbd0 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.211 1+0 records in 00:12:44.211 1+0 records out 00:12:44.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390961 s, 10.5 MB/s 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:12:44.211 07:44:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:48.407 65536+0 records in 00:12:48.407 65536+0 records out 00:12:48.407 33554432 bytes (34 MB, 32 MiB) copied, 3.77314 s, 8.9 MB/s 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.407 [2024-11-29 07:44:37.939501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.407 
07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.407 [2024-11-29 07:44:37.954155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.407 "name": "raid_bdev1", 00:12:48.407 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:48.407 "strip_size_kb": 0, 00:12:48.407 "state": "online", 00:12:48.407 "raid_level": "raid1", 00:12:48.407 "superblock": false, 00:12:48.407 "num_base_bdevs": 2, 00:12:48.407 "num_base_bdevs_discovered": 1, 00:12:48.407 "num_base_bdevs_operational": 1, 00:12:48.407 "base_bdevs_list": [ 00:12:48.407 { 00:12:48.407 "name": null, 00:12:48.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.407 "is_configured": false, 00:12:48.407 "data_offset": 0, 00:12:48.407 "data_size": 65536 00:12:48.407 }, 00:12:48.407 { 00:12:48.407 "name": "BaseBdev2", 00:12:48.407 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:48.407 "is_configured": true, 00:12:48.407 "data_offset": 0, 00:12:48.407 "data_size": 65536 00:12:48.407 } 00:12:48.407 ] 00:12:48.407 }' 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.407 07:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.407 07:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.407 07:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.407 07:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.407 [2024-11-29 07:44:38.345451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.667 [2024-11-29 07:44:38.361665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:48.667 07:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.667 07:44:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:48.667 [2024-11-29 07:44:38.363601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.607 "name": "raid_bdev1", 00:12:49.607 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:49.607 "strip_size_kb": 0, 00:12:49.607 "state": "online", 00:12:49.607 "raid_level": "raid1", 00:12:49.607 "superblock": false, 00:12:49.607 "num_base_bdevs": 2, 00:12:49.607 "num_base_bdevs_discovered": 2, 00:12:49.607 "num_base_bdevs_operational": 2, 00:12:49.607 "process": { 00:12:49.607 "type": "rebuild", 00:12:49.607 "target": "spare", 00:12:49.607 "progress": { 00:12:49.607 "blocks": 20480, 00:12:49.607 "percent": 31 00:12:49.607 } 00:12:49.607 }, 00:12:49.607 "base_bdevs_list": [ 00:12:49.607 { 
00:12:49.607 "name": "spare", 00:12:49.607 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:49.607 "is_configured": true, 00:12:49.607 "data_offset": 0, 00:12:49.607 "data_size": 65536 00:12:49.607 }, 00:12:49.607 { 00:12:49.607 "name": "BaseBdev2", 00:12:49.607 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:49.607 "is_configured": true, 00:12:49.607 "data_offset": 0, 00:12:49.607 "data_size": 65536 00:12:49.607 } 00:12:49.607 ] 00:12:49.607 }' 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 [2024-11-29 07:44:39.539178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.867 [2024-11-29 07:44:39.568694] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:49.867 [2024-11-29 07:44:39.568751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.867 [2024-11-29 07:44:39.568766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.867 [2024-11-29 07:44:39.568776] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.867 07:44:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.867 "name": "raid_bdev1", 00:12:49.867 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:49.867 "strip_size_kb": 0, 00:12:49.867 "state": "online", 00:12:49.867 "raid_level": "raid1", 00:12:49.867 "superblock": false, 00:12:49.867 "num_base_bdevs": 2, 00:12:49.867 "num_base_bdevs_discovered": 1, 
00:12:49.867 "num_base_bdevs_operational": 1, 00:12:49.867 "base_bdevs_list": [ 00:12:49.867 { 00:12:49.867 "name": null, 00:12:49.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.867 "is_configured": false, 00:12:49.867 "data_offset": 0, 00:12:49.867 "data_size": 65536 00:12:49.867 }, 00:12:49.867 { 00:12:49.867 "name": "BaseBdev2", 00:12:49.867 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:49.867 "is_configured": true, 00:12:49.867 "data_offset": 0, 00:12:49.867 "data_size": 65536 00:12:49.867 } 00:12:49.867 ] 00:12:49.867 }' 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.867 07:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.127 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.386 "name": "raid_bdev1", 00:12:50.386 "uuid": 
"f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:50.386 "strip_size_kb": 0, 00:12:50.386 "state": "online", 00:12:50.386 "raid_level": "raid1", 00:12:50.386 "superblock": false, 00:12:50.386 "num_base_bdevs": 2, 00:12:50.386 "num_base_bdevs_discovered": 1, 00:12:50.386 "num_base_bdevs_operational": 1, 00:12:50.386 "base_bdevs_list": [ 00:12:50.386 { 00:12:50.386 "name": null, 00:12:50.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.386 "is_configured": false, 00:12:50.386 "data_offset": 0, 00:12:50.386 "data_size": 65536 00:12:50.386 }, 00:12:50.386 { 00:12:50.386 "name": "BaseBdev2", 00:12:50.386 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:50.386 "is_configured": true, 00:12:50.386 "data_offset": 0, 00:12:50.386 "data_size": 65536 00:12:50.386 } 00:12:50.386 ] 00:12:50.386 }' 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.386 [2024-11-29 07:44:40.187027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.386 [2024-11-29 07:44:40.202827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.386 07:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:50.386 [2024-11-29 07:44:40.204735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.323 "name": "raid_bdev1", 00:12:51.323 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:51.323 "strip_size_kb": 0, 00:12:51.323 "state": "online", 00:12:51.323 "raid_level": "raid1", 00:12:51.323 "superblock": false, 00:12:51.323 "num_base_bdevs": 2, 00:12:51.323 "num_base_bdevs_discovered": 2, 00:12:51.323 "num_base_bdevs_operational": 2, 00:12:51.323 "process": { 00:12:51.323 "type": "rebuild", 00:12:51.323 "target": "spare", 00:12:51.323 "progress": { 00:12:51.323 "blocks": 20480, 00:12:51.323 "percent": 31 00:12:51.323 } 00:12:51.323 }, 00:12:51.323 "base_bdevs_list": [ 00:12:51.323 { 00:12:51.323 "name": "spare", 00:12:51.323 "uuid": 
"3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:51.323 "is_configured": true, 00:12:51.323 "data_offset": 0, 00:12:51.323 "data_size": 65536 00:12:51.323 }, 00:12:51.323 { 00:12:51.323 "name": "BaseBdev2", 00:12:51.323 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:51.323 "is_configured": true, 00:12:51.323 "data_offset": 0, 00:12:51.323 "data_size": 65536 00:12:51.323 } 00:12:51.323 ] 00:12:51.323 }' 00:12:51.323 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=360 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.583 "name": "raid_bdev1", 00:12:51.583 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:51.583 "strip_size_kb": 0, 00:12:51.583 "state": "online", 00:12:51.583 "raid_level": "raid1", 00:12:51.583 "superblock": false, 00:12:51.583 "num_base_bdevs": 2, 00:12:51.583 "num_base_bdevs_discovered": 2, 00:12:51.583 "num_base_bdevs_operational": 2, 00:12:51.583 "process": { 00:12:51.583 "type": "rebuild", 00:12:51.583 "target": "spare", 00:12:51.583 "progress": { 00:12:51.583 "blocks": 22528, 00:12:51.583 "percent": 34 00:12:51.583 } 00:12:51.583 }, 00:12:51.583 "base_bdevs_list": [ 00:12:51.583 { 00:12:51.583 "name": "spare", 00:12:51.583 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:51.583 "is_configured": true, 00:12:51.583 "data_offset": 0, 00:12:51.583 "data_size": 65536 00:12:51.583 }, 00:12:51.583 { 00:12:51.583 "name": "BaseBdev2", 00:12:51.583 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:51.583 "is_configured": true, 00:12:51.583 "data_offset": 0, 00:12:51.583 "data_size": 65536 00:12:51.583 } 00:12:51.583 ] 00:12:51.583 }' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.583 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.584 07:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.976 "name": "raid_bdev1", 00:12:52.976 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:52.976 "strip_size_kb": 0, 00:12:52.976 "state": "online", 00:12:52.976 "raid_level": "raid1", 00:12:52.976 "superblock": false, 00:12:52.976 "num_base_bdevs": 2, 00:12:52.976 "num_base_bdevs_discovered": 2, 00:12:52.976 "num_base_bdevs_operational": 2, 00:12:52.976 "process": { 00:12:52.976 "type": "rebuild", 00:12:52.976 "target": "spare", 
00:12:52.976 "progress": { 00:12:52.976 "blocks": 45056, 00:12:52.976 "percent": 68 00:12:52.976 } 00:12:52.976 }, 00:12:52.976 "base_bdevs_list": [ 00:12:52.976 { 00:12:52.976 "name": "spare", 00:12:52.976 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:52.976 "is_configured": true, 00:12:52.976 "data_offset": 0, 00:12:52.976 "data_size": 65536 00:12:52.976 }, 00:12:52.976 { 00:12:52.976 "name": "BaseBdev2", 00:12:52.976 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:52.976 "is_configured": true, 00:12:52.976 "data_offset": 0, 00:12:52.976 "data_size": 65536 00:12:52.976 } 00:12:52.976 ] 00:12:52.976 }' 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.976 07:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.544 [2024-11-29 07:44:43.417634] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:53.544 [2024-11-29 07:44:43.417784] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:53.544 [2024-11-29 07:44:43.417855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.804 "name": "raid_bdev1", 00:12:53.804 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:53.804 "strip_size_kb": 0, 00:12:53.804 "state": "online", 00:12:53.804 "raid_level": "raid1", 00:12:53.804 "superblock": false, 00:12:53.804 "num_base_bdevs": 2, 00:12:53.804 "num_base_bdevs_discovered": 2, 00:12:53.804 "num_base_bdevs_operational": 2, 00:12:53.804 "base_bdevs_list": [ 00:12:53.804 { 00:12:53.804 "name": "spare", 00:12:53.804 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:53.804 "is_configured": true, 00:12:53.804 "data_offset": 0, 00:12:53.804 "data_size": 65536 00:12:53.804 }, 00:12:53.804 { 00:12:53.804 "name": "BaseBdev2", 00:12:53.804 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:53.804 "is_configured": true, 00:12:53.804 "data_offset": 0, 00:12:53.804 "data_size": 65536 00:12:53.804 } 00:12:53.804 ] 00:12:53.804 }' 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:53.804 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.064 "name": "raid_bdev1", 00:12:54.064 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:54.064 "strip_size_kb": 0, 00:12:54.064 "state": "online", 00:12:54.064 "raid_level": "raid1", 00:12:54.064 "superblock": false, 00:12:54.064 "num_base_bdevs": 2, 00:12:54.064 "num_base_bdevs_discovered": 2, 00:12:54.064 "num_base_bdevs_operational": 2, 00:12:54.064 "base_bdevs_list": [ 00:12:54.064 { 00:12:54.064 "name": "spare", 00:12:54.064 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:54.064 "is_configured": true, 00:12:54.064 "data_offset": 0, 00:12:54.064 "data_size": 65536 
00:12:54.064 }, 00:12:54.064 { 00:12:54.064 "name": "BaseBdev2", 00:12:54.064 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:54.064 "is_configured": true, 00:12:54.064 "data_offset": 0, 00:12:54.064 "data_size": 65536 00:12:54.064 } 00:12:54.064 ] 00:12:54.064 }' 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.064 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.065 "name": "raid_bdev1", 00:12:54.065 "uuid": "f575a3ae-2bcc-4490-900d-75cbc1d98a2f", 00:12:54.065 "strip_size_kb": 0, 00:12:54.065 "state": "online", 00:12:54.065 "raid_level": "raid1", 00:12:54.065 "superblock": false, 00:12:54.065 "num_base_bdevs": 2, 00:12:54.065 "num_base_bdevs_discovered": 2, 00:12:54.065 "num_base_bdevs_operational": 2, 00:12:54.065 "base_bdevs_list": [ 00:12:54.065 { 00:12:54.065 "name": "spare", 00:12:54.065 "uuid": "3cbd0c07-bba8-54a5-a698-ef2be81d1d1b", 00:12:54.065 "is_configured": true, 00:12:54.065 "data_offset": 0, 00:12:54.065 "data_size": 65536 00:12:54.065 }, 00:12:54.065 { 00:12:54.065 "name": "BaseBdev2", 00:12:54.065 "uuid": "1598c057-b8b2-5e08-9082-5ebc848f7936", 00:12:54.065 "is_configured": true, 00:12:54.065 "data_offset": 0, 00:12:54.065 "data_size": 65536 00:12:54.065 } 00:12:54.065 ] 00:12:54.065 }' 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.065 07:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.635 [2024-11-29 07:44:44.406625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.635 [2024-11-29 07:44:44.406708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:12:54.635 [2024-11-29 07:44:44.406811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.635 [2024-11-29 07:44:44.406912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.635 [2024-11-29 07:44:44.406966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.635 07:44:44 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.635 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:54.895 /dev/nbd0 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.895 1+0 records in 00:12:54.895 1+0 records out 00:12:54.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357031 s, 11.5 MB/s 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.895 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:55.155 /dev/nbd1 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.156 1+0 records in 00:12:55.156 1+0 records out 00:12:55.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599143 s, 6.8 MB/s 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.156 07:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.415 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75057 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75057 ']' 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75057 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75057 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75057' 00:12:55.675 killing process with pid 75057 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75057 00:12:55.675 Received shutdown signal, test time was about 60.000000 seconds 00:12:55.675 00:12:55.675 Latency(us) 00:12:55.675 [2024-11-29T07:44:45.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.675 [2024-11-29T07:44:45.620Z] =================================================================================================================== 00:12:55.675 [2024-11-29T07:44:45.620Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.675 [2024-11-29 07:44:45.583196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.675 07:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75057 00:12:55.935 [2024-11-29 07:44:45.874110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.314 07:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:57.314 00:12:57.314 real 0m14.920s 00:12:57.314 user 0m17.081s 00:12:57.314 sys 0m2.846s 00:12:57.314 ************************************ 
00:12:57.314 END TEST raid_rebuild_test 00:12:57.314 ************************************ 00:12:57.314 07:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.314 07:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.314 07:44:47 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:57.314 07:44:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.314 07:44:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.314 07:44:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.314 ************************************ 00:12:57.314 START TEST raid_rebuild_test_sb 00:12:57.314 ************************************ 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.314 
07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75475 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75475 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75475 ']' 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.314 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.314 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.314 Zero copy mechanism will not be used. 00:12:57.314 [2024-11-29 07:44:47.139212] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:57.314 [2024-11-29 07:44:47.139330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75475 ] 00:12:57.573 [2024-11-29 07:44:47.295963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.573 [2024-11-29 07:44:47.404529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.832 [2024-11-29 07:44:47.595559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.832 [2024-11-29 07:44:47.595699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.092 07:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.092 BaseBdev1_malloc 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.092 [2024-11-29 07:44:48.011137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:58.092 [2024-11-29 07:44:48.011194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.092 [2024-11-29 07:44:48.011217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.092 [2024-11-29 07:44:48.011228] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.092 [2024-11-29 07:44:48.013319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.092 [2024-11-29 07:44:48.013362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:58.092 BaseBdev1 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.092 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 BaseBdev2_malloc 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 [2024-11-29 07:44:48.066589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:58.352 [2024-11-29 07:44:48.066650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.352 [2024-11-29 07:44:48.066671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.352 [2024-11-29 07:44:48.066682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.352 [2024-11-29 07:44:48.068722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.352 [2024-11-29 07:44:48.068764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:58.352 BaseBdev2 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 spare_malloc 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 spare_delay 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 [2024-11-29 07:44:48.147725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.352 [2024-11-29 07:44:48.147786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.352 [2024-11-29 07:44:48.147805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.352 [2024-11-29 07:44:48.147822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.352 [2024-11-29 07:44:48.149907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.352 [2024-11-29 07:44:48.150038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.352 spare 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.352 [2024-11-29 07:44:48.159758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.352 [2024-11-29 07:44:48.161549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.352 [2024-11-29 07:44:48.161724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.352 [2024-11-29 07:44:48.161740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.352 [2024-11-29 07:44:48.161991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.352 [2024-11-29 07:44:48.162164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.352 [2024-11-29 07:44:48.162174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.352 [2024-11-29 07:44:48.162317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.352 07:44:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.352 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.352 "name": "raid_bdev1", 00:12:58.352 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:12:58.353 "strip_size_kb": 0, 00:12:58.353 "state": "online", 00:12:58.353 "raid_level": "raid1", 00:12:58.353 "superblock": true, 00:12:58.353 "num_base_bdevs": 2, 00:12:58.353 "num_base_bdevs_discovered": 2, 00:12:58.353 "num_base_bdevs_operational": 2, 00:12:58.353 "base_bdevs_list": [ 00:12:58.353 { 00:12:58.353 "name": "BaseBdev1", 00:12:58.353 "uuid": "6f257955-6ead-5c06-9bf6-2c957d6b479b", 00:12:58.353 "is_configured": true, 00:12:58.353 "data_offset": 2048, 00:12:58.353 "data_size": 63488 00:12:58.353 }, 00:12:58.353 { 00:12:58.353 "name": "BaseBdev2", 00:12:58.353 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:12:58.353 "is_configured": true, 00:12:58.353 "data_offset": 2048, 00:12:58.353 "data_size": 63488 00:12:58.353 } 00:12:58.353 ] 00:12:58.353 }' 00:12:58.353 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.353 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.923 [2024-11-29 07:44:48.567349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.923 
07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.923 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:58.923 [2024-11-29 07:44:48.834636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:58.923 /dev/nbd0 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.183 07:44:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.183 1+0 records in 00:12:59.183 1+0 records out 00:12:59.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575532 s, 7.1 MB/s 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:59.183 07:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:03.387 63488+0 records in 00:13:03.387 63488+0 records out 00:13:03.387 32505856 bytes (33 MB, 31 MiB) copied, 3.72733 s, 8.7 MB/s 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.387 [2024-11-29 07:44:52.849625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.387 [2024-11-29 07:44:52.885656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.387 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.388 07:44:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.388 "name": "raid_bdev1", 00:13:03.388 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:03.388 "strip_size_kb": 0, 00:13:03.388 "state": "online", 00:13:03.388 "raid_level": "raid1", 00:13:03.388 "superblock": true, 00:13:03.388 "num_base_bdevs": 2, 
00:13:03.388 "num_base_bdevs_discovered": 1, 00:13:03.388 "num_base_bdevs_operational": 1, 00:13:03.388 "base_bdevs_list": [ 00:13:03.388 { 00:13:03.388 "name": null, 00:13:03.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.388 "is_configured": false, 00:13:03.388 "data_offset": 0, 00:13:03.388 "data_size": 63488 00:13:03.388 }, 00:13:03.388 { 00:13:03.388 "name": "BaseBdev2", 00:13:03.388 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:03.388 "is_configured": true, 00:13:03.388 "data_offset": 2048, 00:13:03.388 "data_size": 63488 00:13:03.388 } 00:13:03.388 ] 00:13:03.388 }' 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.388 07:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.664 07:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.664 07:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.664 07:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.664 [2024-11-29 07:44:53.368857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.664 [2024-11-29 07:44:53.387480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:03.664 07:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.664 [2024-11-29 07:44:53.389399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.664 07:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.604 07:44:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.604 "name": "raid_bdev1", 00:13:04.604 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:04.604 "strip_size_kb": 0, 00:13:04.604 "state": "online", 00:13:04.604 "raid_level": "raid1", 00:13:04.604 "superblock": true, 00:13:04.604 "num_base_bdevs": 2, 00:13:04.604 "num_base_bdevs_discovered": 2, 00:13:04.604 "num_base_bdevs_operational": 2, 00:13:04.604 "process": { 00:13:04.604 "type": "rebuild", 00:13:04.604 "target": "spare", 00:13:04.604 "progress": { 00:13:04.604 "blocks": 20480, 00:13:04.604 "percent": 32 00:13:04.604 } 00:13:04.604 }, 00:13:04.604 "base_bdevs_list": [ 00:13:04.604 { 00:13:04.604 "name": "spare", 00:13:04.604 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:04.604 "is_configured": true, 00:13:04.604 "data_offset": 2048, 00:13:04.604 "data_size": 63488 00:13:04.604 }, 00:13:04.604 { 00:13:04.604 "name": "BaseBdev2", 00:13:04.604 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:04.604 "is_configured": true, 00:13:04.604 "data_offset": 2048, 00:13:04.604 "data_size": 63488 00:13:04.604 } 
00:13:04.604 ] 00:13:04.604 }' 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.604 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.604 [2024-11-29 07:44:54.532946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.864 [2024-11-29 07:44:54.594496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.864 [2024-11-29 07:44:54.594559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.864 [2024-11-29 07:44:54.594575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.864 [2024-11-29 07:44:54.594587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.864 "name": "raid_bdev1", 00:13:04.864 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:04.864 "strip_size_kb": 0, 00:13:04.864 "state": "online", 00:13:04.864 "raid_level": "raid1", 00:13:04.864 "superblock": true, 00:13:04.864 "num_base_bdevs": 2, 00:13:04.864 "num_base_bdevs_discovered": 1, 00:13:04.864 "num_base_bdevs_operational": 1, 00:13:04.864 "base_bdevs_list": [ 00:13:04.864 { 00:13:04.864 "name": null, 00:13:04.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.864 "is_configured": false, 00:13:04.864 "data_offset": 0, 00:13:04.864 "data_size": 63488 00:13:04.864 }, 00:13:04.864 { 00:13:04.864 "name": "BaseBdev2", 00:13:04.864 "uuid": 
"e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:04.864 "is_configured": true, 00:13:04.864 "data_offset": 2048, 00:13:04.864 "data_size": 63488 00:13:04.864 } 00:13:04.864 ] 00:13:04.864 }' 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.864 07:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.433 "name": "raid_bdev1", 00:13:05.433 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:05.433 "strip_size_kb": 0, 00:13:05.433 "state": "online", 00:13:05.433 "raid_level": "raid1", 00:13:05.433 "superblock": true, 00:13:05.433 "num_base_bdevs": 2, 00:13:05.433 "num_base_bdevs_discovered": 1, 00:13:05.433 "num_base_bdevs_operational": 1, 00:13:05.433 "base_bdevs_list": [ 00:13:05.433 { 
00:13:05.433 "name": null, 00:13:05.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.433 "is_configured": false, 00:13:05.433 "data_offset": 0, 00:13:05.433 "data_size": 63488 00:13:05.433 }, 00:13:05.433 { 00:13:05.433 "name": "BaseBdev2", 00:13:05.433 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:05.433 "is_configured": true, 00:13:05.433 "data_offset": 2048, 00:13:05.433 "data_size": 63488 00:13:05.433 } 00:13:05.433 ] 00:13:05.433 }' 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.433 [2024-11-29 07:44:55.240728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.433 [2024-11-29 07:44:55.256371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.433 07:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.433 [2024-11-29 07:44:55.258243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.371 07:44:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.371 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.371 "name": "raid_bdev1", 00:13:06.371 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:06.371 "strip_size_kb": 0, 00:13:06.371 "state": "online", 00:13:06.371 "raid_level": "raid1", 00:13:06.371 "superblock": true, 00:13:06.371 "num_base_bdevs": 2, 00:13:06.371 "num_base_bdevs_discovered": 2, 00:13:06.371 "num_base_bdevs_operational": 2, 00:13:06.371 "process": { 00:13:06.371 "type": "rebuild", 00:13:06.371 "target": "spare", 00:13:06.371 "progress": { 00:13:06.371 "blocks": 20480, 00:13:06.371 "percent": 32 00:13:06.371 } 00:13:06.371 }, 00:13:06.371 "base_bdevs_list": [ 00:13:06.371 { 00:13:06.371 "name": "spare", 00:13:06.371 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:06.371 "is_configured": true, 00:13:06.371 "data_offset": 2048, 00:13:06.371 "data_size": 63488 00:13:06.371 }, 00:13:06.371 { 00:13:06.371 "name": "BaseBdev2", 00:13:06.371 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:06.371 
"is_configured": true, 00:13:06.371 "data_offset": 2048, 00:13:06.371 "data_size": 63488 00:13:06.371 } 00:13:06.371 ] 00:13:06.371 }' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:06.632 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.632 "name": "raid_bdev1", 00:13:06.632 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:06.632 "strip_size_kb": 0, 00:13:06.632 "state": "online", 00:13:06.632 "raid_level": "raid1", 00:13:06.632 "superblock": true, 00:13:06.632 "num_base_bdevs": 2, 00:13:06.632 "num_base_bdevs_discovered": 2, 00:13:06.632 "num_base_bdevs_operational": 2, 00:13:06.632 "process": { 00:13:06.632 "type": "rebuild", 00:13:06.632 "target": "spare", 00:13:06.632 "progress": { 00:13:06.632 "blocks": 22528, 00:13:06.632 "percent": 35 00:13:06.632 } 00:13:06.632 }, 00:13:06.632 "base_bdevs_list": [ 00:13:06.632 { 00:13:06.632 "name": "spare", 00:13:06.632 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:06.632 "is_configured": true, 00:13:06.632 "data_offset": 2048, 00:13:06.632 "data_size": 63488 00:13:06.632 }, 00:13:06.632 { 00:13:06.632 "name": "BaseBdev2", 00:13:06.632 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:06.632 "is_configured": true, 00:13:06.632 "data_offset": 2048, 00:13:06.632 "data_size": 63488 00:13:06.632 } 00:13:06.632 ] 00:13:06.632 }' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.632 07:44:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.632 07:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.013 "name": "raid_bdev1", 00:13:08.013 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:08.013 "strip_size_kb": 0, 00:13:08.013 "state": "online", 00:13:08.013 "raid_level": "raid1", 00:13:08.013 "superblock": true, 00:13:08.013 "num_base_bdevs": 2, 00:13:08.013 "num_base_bdevs_discovered": 2, 00:13:08.013 "num_base_bdevs_operational": 2, 00:13:08.013 "process": { 
00:13:08.013 "type": "rebuild", 00:13:08.013 "target": "spare", 00:13:08.013 "progress": { 00:13:08.013 "blocks": 47104, 00:13:08.013 "percent": 74 00:13:08.013 } 00:13:08.013 }, 00:13:08.013 "base_bdevs_list": [ 00:13:08.013 { 00:13:08.013 "name": "spare", 00:13:08.013 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:08.013 "is_configured": true, 00:13:08.013 "data_offset": 2048, 00:13:08.013 "data_size": 63488 00:13:08.013 }, 00:13:08.013 { 00:13:08.013 "name": "BaseBdev2", 00:13:08.013 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:08.013 "is_configured": true, 00:13:08.013 "data_offset": 2048, 00:13:08.013 "data_size": 63488 00:13:08.013 } 00:13:08.013 ] 00:13:08.013 }' 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.013 07:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.582 [2024-11-29 07:44:58.370656] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.582 [2024-11-29 07:44:58.370786] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.582 [2024-11-29 07:44:58.370934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.842 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.842 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.842 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.842 
07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.842 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.843 "name": "raid_bdev1", 00:13:08.843 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:08.843 "strip_size_kb": 0, 00:13:08.843 "state": "online", 00:13:08.843 "raid_level": "raid1", 00:13:08.843 "superblock": true, 00:13:08.843 "num_base_bdevs": 2, 00:13:08.843 "num_base_bdevs_discovered": 2, 00:13:08.843 "num_base_bdevs_operational": 2, 00:13:08.843 "base_bdevs_list": [ 00:13:08.843 { 00:13:08.843 "name": "spare", 00:13:08.843 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:08.843 "is_configured": true, 00:13:08.843 "data_offset": 2048, 00:13:08.843 "data_size": 63488 00:13:08.843 }, 00:13:08.843 { 00:13:08.843 "name": "BaseBdev2", 00:13:08.843 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:08.843 "is_configured": true, 00:13:08.843 "data_offset": 2048, 00:13:08.843 "data_size": 63488 00:13:08.843 } 00:13:08.843 ] 00:13:08.843 }' 00:13:08.843 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.103 "name": "raid_bdev1", 00:13:09.103 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:09.103 "strip_size_kb": 0, 00:13:09.103 "state": "online", 00:13:09.103 "raid_level": "raid1", 00:13:09.103 "superblock": true, 00:13:09.103 "num_base_bdevs": 2, 00:13:09.103 "num_base_bdevs_discovered": 2, 00:13:09.103 "num_base_bdevs_operational": 2, 00:13:09.103 "base_bdevs_list": [ 00:13:09.103 { 00:13:09.103 
"name": "spare", 00:13:09.103 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:09.103 "is_configured": true, 00:13:09.103 "data_offset": 2048, 00:13:09.103 "data_size": 63488 00:13:09.103 }, 00:13:09.103 { 00:13:09.103 "name": "BaseBdev2", 00:13:09.103 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:09.103 "is_configured": true, 00:13:09.103 "data_offset": 2048, 00:13:09.103 "data_size": 63488 00:13:09.103 } 00:13:09.103 ] 00:13:09.103 }' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.103 07:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.103 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.103 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.103 "name": "raid_bdev1", 00:13:09.103 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:09.103 "strip_size_kb": 0, 00:13:09.103 "state": "online", 00:13:09.103 "raid_level": "raid1", 00:13:09.103 "superblock": true, 00:13:09.103 "num_base_bdevs": 2, 00:13:09.103 "num_base_bdevs_discovered": 2, 00:13:09.103 "num_base_bdevs_operational": 2, 00:13:09.103 "base_bdevs_list": [ 00:13:09.103 { 00:13:09.103 "name": "spare", 00:13:09.103 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:09.103 "is_configured": true, 00:13:09.103 "data_offset": 2048, 00:13:09.103 "data_size": 63488 00:13:09.103 }, 00:13:09.103 { 00:13:09.103 "name": "BaseBdev2", 00:13:09.103 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:09.103 "is_configured": true, 00:13:09.103 "data_offset": 2048, 00:13:09.103 "data_size": 63488 00:13:09.103 } 00:13:09.103 ] 00:13:09.103 }' 00:13:09.103 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.103 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.673 [2024-11-29 07:44:59.387180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.673 [2024-11-29 07:44:59.387264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.673 [2024-11-29 07:44:59.387368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.673 [2024-11-29 07:44:59.387450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.673 [2024-11-29 07:44:59.387499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.673 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:09.933 /dev/nbd0 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.933 1+0 records in 00:13:09.933 1+0 records out 00:13:09.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504469 s, 8.1 MB/s 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.933 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:09.933 /dev/nbd1 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:10.193 07:44:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.193 1+0 records in 00:13:10.193 1+0 records out 00:13:10.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465528 s, 8.8 MB/s 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:10.193 07:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.193 
07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.193 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.452 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.712 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.712 [2024-11-29 07:45:00.525538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.712 [2024-11-29 07:45:00.525594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.712 [2024-11-29 07:45:00.525619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:10.712 [2024-11-29 07:45:00.525628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.712 [2024-11-29 07:45:00.527877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.712 [2024-11-29 07:45:00.527949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.712 [2024-11-29 07:45:00.528071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.712 [2024-11-29 
07:45:00.528182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.713 [2024-11-29 07:45:00.528374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.713 spare 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.713 [2024-11-29 07:45:00.628324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:10.713 [2024-11-29 07:45:00.628353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.713 [2024-11-29 07:45:00.628621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:10.713 [2024-11-29 07:45:00.628797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:10.713 [2024-11-29 07:45:00.628807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:10.713 [2024-11-29 07:45:00.628958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.713 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.972 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.972 "name": "raid_bdev1", 00:13:10.972 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:10.972 "strip_size_kb": 0, 00:13:10.972 "state": "online", 00:13:10.972 "raid_level": "raid1", 00:13:10.972 "superblock": true, 00:13:10.972 "num_base_bdevs": 2, 00:13:10.972 "num_base_bdevs_discovered": 2, 00:13:10.972 "num_base_bdevs_operational": 2, 00:13:10.972 "base_bdevs_list": [ 00:13:10.972 { 00:13:10.972 "name": "spare", 00:13:10.972 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:10.972 "is_configured": true, 00:13:10.972 "data_offset": 2048, 00:13:10.972 "data_size": 63488 00:13:10.972 }, 00:13:10.972 { 00:13:10.972 "name": "BaseBdev2", 00:13:10.972 "uuid": 
"e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:10.972 "is_configured": true, 00:13:10.972 "data_offset": 2048, 00:13:10.972 "data_size": 63488 00:13:10.972 } 00:13:10.972 ] 00:13:10.972 }' 00:13:10.972 07:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.972 07:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.231 "name": "raid_bdev1", 00:13:11.231 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:11.231 "strip_size_kb": 0, 00:13:11.231 "state": "online", 00:13:11.231 "raid_level": "raid1", 00:13:11.231 "superblock": true, 00:13:11.231 "num_base_bdevs": 2, 00:13:11.231 "num_base_bdevs_discovered": 2, 00:13:11.231 "num_base_bdevs_operational": 2, 00:13:11.231 "base_bdevs_list": [ 00:13:11.231 { 
00:13:11.231 "name": "spare", 00:13:11.231 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:11.231 "is_configured": true, 00:13:11.231 "data_offset": 2048, 00:13:11.231 "data_size": 63488 00:13:11.231 }, 00:13:11.231 { 00:13:11.231 "name": "BaseBdev2", 00:13:11.231 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:11.231 "is_configured": true, 00:13:11.231 "data_offset": 2048, 00:13:11.231 "data_size": 63488 00:13:11.231 } 00:13:11.231 ] 00:13:11.231 }' 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.231 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.489 [2024-11-29 07:45:01.248344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
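The `verify_raid_bdev_state` checks above filter the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compare individual fields against the expected state. A minimal Python sketch of that select-and-assert pattern (the payload is modeled on the JSON dumped in this log, trimmed to the compared fields; it is an illustration, not the harness's actual code):

```python
import json

# Sample payload modeled on the bdev_raid_get_bdevs output in this log,
# reduced to the fields verify_raid_bdev_state compares.
payload = json.loads("""
[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
  "strip_size_kb": 0, "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}]
""")

def select_bdev(bdevs, name):
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    return next(b for b in bdevs if b["name"] == name)

info = select_bdev(payload, "raid_bdev1")
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0          # raid1 has no stripes
assert info["num_base_bdevs_operational"] == 2
```

The shell helper does the same comparisons with `[[ ... == ... ]]` tests after extracting each field from `$raid_bdev_info` with jq.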
00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.489 "name": "raid_bdev1", 00:13:11.489 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:11.489 "strip_size_kb": 0, 00:13:11.489 
"state": "online", 00:13:11.489 "raid_level": "raid1", 00:13:11.489 "superblock": true, 00:13:11.489 "num_base_bdevs": 2, 00:13:11.489 "num_base_bdevs_discovered": 1, 00:13:11.489 "num_base_bdevs_operational": 1, 00:13:11.489 "base_bdevs_list": [ 00:13:11.489 { 00:13:11.489 "name": null, 00:13:11.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.489 "is_configured": false, 00:13:11.489 "data_offset": 0, 00:13:11.489 "data_size": 63488 00:13:11.489 }, 00:13:11.489 { 00:13:11.489 "name": "BaseBdev2", 00:13:11.489 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:11.489 "is_configured": true, 00:13:11.489 "data_offset": 2048, 00:13:11.489 "data_size": 63488 00:13:11.489 } 00:13:11.489 ] 00:13:11.489 }' 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.489 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.749 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.749 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.749 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.749 [2024-11-29 07:45:01.655694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.749 [2024-11-29 07:45:01.655923] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.749 [2024-11-29 07:45:01.655940] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
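The `Re-adding bdev spare to raid bdev raid_bdev1` notice above is driven by a superblock sequence-number comparison: the spare's on-disk superblock reports seq_number 4 while the existing raid bdev is at 5, so the base bdev is considered stale and a rebuild is started. A hedged sketch of that decision (my reading of the log messages, not SPDK's actual implementation):

```python
def needs_rebuild(base_sb_seq: int, raid_sb_seq: int) -> bool:
    # A base bdev whose superblock sequence number lags the array's is
    # stale: it gets re-added and a rebuild is started, as in the
    # "seq_number on bdev spare (4) smaller than existing raid bdev
    # raid_bdev1 (5)" message in this log.
    return base_sb_seq < raid_sb_seq

assert needs_rebuild(4, 5)        # the case hit in this log
assert not needs_rebuild(5, 5)    # presumably up to date: no rebuild
```

Whether an equal sequence number always skips the rebuild is an assumption here; the log only exercises the stale (4 < 5) path.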
00:13:11.749 [2024-11-29 07:45:01.655977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.749 [2024-11-29 07:45:01.671474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:11.749 07:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.749 07:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:11.749 [2024-11-29 07:45:01.673294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.132 "name": "raid_bdev1", 00:13:13.132 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:13.132 "strip_size_kb": 0, 00:13:13.132 "state": "online", 00:13:13.132 "raid_level": "raid1", 
00:13:13.132 "superblock": true, 00:13:13.132 "num_base_bdevs": 2, 00:13:13.132 "num_base_bdevs_discovered": 2, 00:13:13.132 "num_base_bdevs_operational": 2, 00:13:13.132 "process": { 00:13:13.132 "type": "rebuild", 00:13:13.132 "target": "spare", 00:13:13.132 "progress": { 00:13:13.132 "blocks": 20480, 00:13:13.132 "percent": 32 00:13:13.132 } 00:13:13.132 }, 00:13:13.132 "base_bdevs_list": [ 00:13:13.132 { 00:13:13.132 "name": "spare", 00:13:13.132 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:13.132 "is_configured": true, 00:13:13.132 "data_offset": 2048, 00:13:13.132 "data_size": 63488 00:13:13.132 }, 00:13:13.132 { 00:13:13.132 "name": "BaseBdev2", 00:13:13.132 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:13.132 "is_configured": true, 00:13:13.132 "data_offset": 2048, 00:13:13.132 "data_size": 63488 00:13:13.132 } 00:13:13.132 ] 00:13:13.132 }' 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.132 [2024-11-29 07:45:02.833356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.132 [2024-11-29 07:45:02.878271] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:13.132 [2024-11-29 07:45:02.878343] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:13.132 [2024-11-29 07:45:02.878357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.132 [2024-11-29 07:45:02.878365] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.132 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.132 "name": "raid_bdev1", 00:13:13.132 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:13.132 "strip_size_kb": 0, 00:13:13.132 "state": "online", 00:13:13.132 "raid_level": "raid1", 00:13:13.132 "superblock": true, 00:13:13.132 "num_base_bdevs": 2, 00:13:13.132 "num_base_bdevs_discovered": 1, 00:13:13.132 "num_base_bdevs_operational": 1, 00:13:13.132 "base_bdevs_list": [ 00:13:13.132 { 00:13:13.132 "name": null, 00:13:13.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.132 "is_configured": false, 00:13:13.132 "data_offset": 0, 00:13:13.133 "data_size": 63488 00:13:13.133 }, 00:13:13.133 { 00:13:13.133 "name": "BaseBdev2", 00:13:13.133 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:13.133 "is_configured": true, 00:13:13.133 "data_offset": 2048, 00:13:13.133 "data_size": 63488 00:13:13.133 } 00:13:13.133 ] 00:13:13.133 }' 00:13:13.133 07:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.133 07:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.703 07:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.703 07:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.703 07:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.703 [2024-11-29 07:45:03.355364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.703 [2024-11-29 07:45:03.355435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.703 [2024-11-29 07:45:03.355458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:13.703 [2024-11-29 07:45:03.355469] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.703 [2024-11-29 07:45:03.355973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.703 [2024-11-29 07:45:03.356009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.703 [2024-11-29 07:45:03.356129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:13.703 [2024-11-29 07:45:03.356145] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.703 [2024-11-29 07:45:03.356156] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:13.703 [2024-11-29 07:45:03.356186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.703 [2024-11-29 07:45:03.372362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:13.703 spare 00:13:13.703 07:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.703 07:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:13.703 [2024-11-29 07:45:03.374243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.644 "name": "raid_bdev1", 00:13:14.644 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:14.644 "strip_size_kb": 0, 00:13:14.644 "state": "online", 00:13:14.644 "raid_level": "raid1", 00:13:14.644 "superblock": true, 00:13:14.644 "num_base_bdevs": 2, 00:13:14.644 "num_base_bdevs_discovered": 2, 00:13:14.644 "num_base_bdevs_operational": 2, 00:13:14.644 "process": { 00:13:14.644 "type": "rebuild", 00:13:14.644 "target": "spare", 00:13:14.644 "progress": { 00:13:14.644 "blocks": 20480, 00:13:14.644 "percent": 32 00:13:14.644 } 00:13:14.644 }, 00:13:14.644 "base_bdevs_list": [ 00:13:14.644 { 00:13:14.644 "name": "spare", 00:13:14.644 "uuid": "bafe3116-93f1-5f27-b778-f40a40c03ca8", 00:13:14.644 "is_configured": true, 00:13:14.644 "data_offset": 2048, 00:13:14.644 "data_size": 63488 00:13:14.644 }, 00:13:14.644 { 00:13:14.644 "name": "BaseBdev2", 00:13:14.644 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:14.644 "is_configured": true, 00:13:14.644 "data_offset": 2048, 00:13:14.644 "data_size": 63488 00:13:14.644 } 00:13:14.644 ] 00:13:14.644 }' 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.644 
07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.644 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.644 [2024-11-29 07:45:04.505412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.644 [2024-11-29 07:45:04.579529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.644 [2024-11-29 07:45:04.579583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.644 [2024-11-29 07:45:04.579616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.645 [2024-11-29 07:45:04.579623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.907 "name": "raid_bdev1", 00:13:14.907 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:14.907 "strip_size_kb": 0, 00:13:14.907 "state": "online", 00:13:14.907 "raid_level": "raid1", 00:13:14.907 "superblock": true, 00:13:14.907 "num_base_bdevs": 2, 00:13:14.907 "num_base_bdevs_discovered": 1, 00:13:14.907 "num_base_bdevs_operational": 1, 00:13:14.907 "base_bdevs_list": [ 00:13:14.907 { 00:13:14.907 "name": null, 00:13:14.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.907 "is_configured": false, 00:13:14.907 "data_offset": 0, 00:13:14.907 "data_size": 63488 00:13:14.907 }, 00:13:14.907 { 00:13:14.907 "name": "BaseBdev2", 00:13:14.907 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:14.907 "is_configured": true, 00:13:14.907 "data_offset": 2048, 00:13:14.907 "data_size": 63488 00:13:14.907 } 00:13:14.907 ] 00:13:14.907 }' 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.907 07:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.180 07:45:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.180 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.449 "name": "raid_bdev1", 00:13:15.449 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:15.449 "strip_size_kb": 0, 00:13:15.449 "state": "online", 00:13:15.449 "raid_level": "raid1", 00:13:15.449 "superblock": true, 00:13:15.449 "num_base_bdevs": 2, 00:13:15.449 "num_base_bdevs_discovered": 1, 00:13:15.449 "num_base_bdevs_operational": 1, 00:13:15.449 "base_bdevs_list": [ 00:13:15.449 { 00:13:15.449 "name": null, 00:13:15.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.449 "is_configured": false, 00:13:15.449 "data_offset": 0, 00:13:15.449 "data_size": 63488 00:13:15.449 }, 00:13:15.449 { 00:13:15.449 "name": "BaseBdev2", 00:13:15.449 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:15.449 "is_configured": true, 00:13:15.449 "data_offset": 2048, 00:13:15.449 "data_size": 
63488 00:13:15.449 } 00:13:15.449 ] 00:13:15.449 }' 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.449 [2024-11-29 07:45:05.226003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.449 [2024-11-29 07:45:05.226062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.449 [2024-11-29 07:45:05.226092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:15.449 [2024-11-29 07:45:05.226212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.449 [2024-11-29 07:45:05.226686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.449 [2024-11-29 07:45:05.226747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:15.449 [2024-11-29 07:45:05.226838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:15.449 [2024-11-29 07:45:05.226853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.449 [2024-11-29 07:45:05.226864] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.449 [2024-11-29 07:45:05.226874] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:15.449 BaseBdev1 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.449 07:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:16.389 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.390 "name": "raid_bdev1", 00:13:16.390 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:16.390 "strip_size_kb": 0, 00:13:16.390 "state": "online", 00:13:16.390 "raid_level": "raid1", 00:13:16.390 "superblock": true, 00:13:16.390 "num_base_bdevs": 2, 00:13:16.390 "num_base_bdevs_discovered": 1, 00:13:16.390 "num_base_bdevs_operational": 1, 00:13:16.390 "base_bdevs_list": [ 00:13:16.390 { 00:13:16.390 "name": null, 00:13:16.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.390 "is_configured": false, 00:13:16.390 "data_offset": 0, 00:13:16.390 "data_size": 63488 00:13:16.390 }, 00:13:16.390 { 00:13:16.390 "name": "BaseBdev2", 00:13:16.390 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:16.390 "is_configured": true, 00:13:16.390 "data_offset": 2048, 00:13:16.390 "data_size": 63488 00:13:16.390 } 00:13:16.390 ] 00:13:16.390 }' 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.390 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.960 "name": "raid_bdev1", 00:13:16.960 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:16.960 "strip_size_kb": 0, 00:13:16.960 "state": "online", 00:13:16.960 "raid_level": "raid1", 00:13:16.960 "superblock": true, 00:13:16.960 "num_base_bdevs": 2, 00:13:16.960 "num_base_bdevs_discovered": 1, 00:13:16.960 "num_base_bdevs_operational": 1, 00:13:16.960 "base_bdevs_list": [ 00:13:16.960 { 00:13:16.960 "name": null, 00:13:16.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.960 "is_configured": false, 00:13:16.960 "data_offset": 0, 00:13:16.960 "data_size": 63488 00:13:16.960 }, 00:13:16.960 { 00:13:16.960 "name": "BaseBdev2", 00:13:16.960 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:16.960 "is_configured": true, 00:13:16.960 "data_offset": 2048, 00:13:16.960 "data_size": 63488 00:13:16.960 } 00:13:16.960 ] 00:13:16.960 }' 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.960 07:45:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.960 [2024-11-29 07:45:06.759440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.960 [2024-11-29 07:45:06.759658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.960 [2024-11-29 07:45:06.759681] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.960 request: 00:13:16.960 { 00:13:16.960 "base_bdev": "BaseBdev1", 00:13:16.960 "raid_bdev": "raid_bdev1", 00:13:16.960 "method": 
"bdev_raid_add_base_bdev", 00:13:16.960 "req_id": 1 00:13:16.960 } 00:13:16.960 Got JSON-RPC error response 00:13:16.960 response: 00:13:16.960 { 00:13:16.960 "code": -22, 00:13:16.960 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:16.960 } 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.960 07:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:17.900 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.901 07:45:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.901 "name": "raid_bdev1", 00:13:17.901 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:17.901 "strip_size_kb": 0, 00:13:17.901 "state": "online", 00:13:17.901 "raid_level": "raid1", 00:13:17.901 "superblock": true, 00:13:17.901 "num_base_bdevs": 2, 00:13:17.901 "num_base_bdevs_discovered": 1, 00:13:17.901 "num_base_bdevs_operational": 1, 00:13:17.901 "base_bdevs_list": [ 00:13:17.901 { 00:13:17.901 "name": null, 00:13:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.901 "is_configured": false, 00:13:17.901 "data_offset": 0, 00:13:17.901 "data_size": 63488 00:13:17.901 }, 00:13:17.901 { 00:13:17.901 "name": "BaseBdev2", 00:13:17.901 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:17.901 "is_configured": true, 00:13:17.901 "data_offset": 2048, 00:13:17.901 "data_size": 63488 00:13:17.901 } 00:13:17.901 ] 00:13:17.901 }' 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.901 07:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.472 "name": "raid_bdev1", 00:13:18.472 "uuid": "6362be58-9905-49f4-97f1-622a02f512a1", 00:13:18.472 "strip_size_kb": 0, 00:13:18.472 "state": "online", 00:13:18.472 "raid_level": "raid1", 00:13:18.472 "superblock": true, 00:13:18.472 "num_base_bdevs": 2, 00:13:18.472 "num_base_bdevs_discovered": 1, 00:13:18.472 "num_base_bdevs_operational": 1, 00:13:18.472 "base_bdevs_list": [ 00:13:18.472 { 00:13:18.472 "name": null, 00:13:18.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.472 "is_configured": false, 00:13:18.472 "data_offset": 0, 00:13:18.472 "data_size": 63488 00:13:18.472 }, 00:13:18.472 { 00:13:18.472 "name": "BaseBdev2", 00:13:18.472 "uuid": "e3c89326-4d17-5e3e-9538-49c4e9c57b6c", 00:13:18.472 "is_configured": true, 00:13:18.472 "data_offset": 2048, 00:13:18.472 "data_size": 63488 00:13:18.472 } 00:13:18.472 ] 00:13:18.472 }' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75475 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75475 ']' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75475 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75475 00:13:18.472 killing process with pid 75475 00:13:18.472 Received shutdown signal, test time was about 60.000000 seconds 00:13:18.472 00:13:18.472 Latency(us) 00:13:18.472 [2024-11-29T07:45:08.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.472 [2024-11-29T07:45:08.417Z] =================================================================================================================== 00:13:18.472 [2024-11-29T07:45:08.417Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75475' 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75475 00:13:18.472 [2024-11-29 07:45:08.365593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.472 [2024-11-29 
07:45:08.365711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.472 07:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75475 00:13:18.472 [2024-11-29 07:45:08.365760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.472 [2024-11-29 07:45:08.365771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:18.732 [2024-11-29 07:45:08.661495] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:20.116 00:13:20.116 real 0m22.722s 00:13:20.116 user 0m27.782s 00:13:20.116 sys 0m3.413s 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.116 ************************************ 00:13:20.116 END TEST raid_rebuild_test_sb 00:13:20.116 ************************************ 00:13:20.116 07:45:09 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:20.116 07:45:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:20.116 07:45:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.116 07:45:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.116 ************************************ 00:13:20.116 START TEST raid_rebuild_test_io 00:13:20.116 ************************************ 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:20.116 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:20.117 
07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76194 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76194 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76194 ']' 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.117 07:45:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.117 [2024-11-29 07:45:09.954274] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:20.117 [2024-11-29 07:45:09.954492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76194 ] 00:13:20.117 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:20.117 Zero copy mechanism will not be used. 
00:13:20.377 [2024-11-29 07:45:10.133148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.377 [2024-11-29 07:45:10.243286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.637 [2024-11-29 07:45:10.442670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.637 [2024-11-29 07:45:10.442755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.896 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.156 BaseBdev1_malloc 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.156 [2024-11-29 07:45:10.858254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.156 [2024-11-29 07:45:10.858329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.156 [2024-11-29 07:45:10.858350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:21.156 [2024-11-29 
07:45:10.858361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.156 [2024-11-29 07:45:10.860349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.156 [2024-11-29 07:45:10.860388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.156 BaseBdev1 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.156 BaseBdev2_malloc 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.156 [2024-11-29 07:45:10.912877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:21.156 [2024-11-29 07:45:10.912992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.156 [2024-11-29 07:45:10.913018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:21.156 [2024-11-29 07:45:10.913030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.156 [2024-11-29 07:45:10.915125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:21.156 [2024-11-29 07:45:10.915161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.156 BaseBdev2 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.156 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.157 spare_malloc 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.157 spare_delay 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.157 [2024-11-29 07:45:10.993367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:21.157 [2024-11-29 07:45:10.993439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.157 [2024-11-29 07:45:10.993458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:21.157 [2024-11-29 07:45:10.993469] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.157 [2024-11-29 07:45:10.995701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.157 [2024-11-29 07:45:10.995778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:21.157 spare 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.157 07:45:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.157 [2024-11-29 07:45:11.005396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.157 [2024-11-29 07:45:11.007216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.157 [2024-11-29 07:45:11.007349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:21.157 [2024-11-29 07:45:11.007384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:21.157 [2024-11-29 07:45:11.007665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:21.157 [2024-11-29 07:45:11.007868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:21.157 [2024-11-29 07:45:11.007913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:21.157 [2024-11-29 07:45:11.008078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.157 "name": "raid_bdev1", 00:13:21.157 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:21.157 "strip_size_kb": 0, 00:13:21.157 "state": "online", 00:13:21.157 "raid_level": "raid1", 00:13:21.157 "superblock": false, 00:13:21.157 "num_base_bdevs": 2, 00:13:21.157 
"num_base_bdevs_discovered": 2, 00:13:21.157 "num_base_bdevs_operational": 2, 00:13:21.157 "base_bdevs_list": [ 00:13:21.157 { 00:13:21.157 "name": "BaseBdev1", 00:13:21.157 "uuid": "82f121b8-8c96-576d-8686-e7c53a9c5a8d", 00:13:21.157 "is_configured": true, 00:13:21.157 "data_offset": 0, 00:13:21.157 "data_size": 65536 00:13:21.157 }, 00:13:21.157 { 00:13:21.157 "name": "BaseBdev2", 00:13:21.157 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:21.157 "is_configured": true, 00:13:21.157 "data_offset": 0, 00:13:21.157 "data_size": 65536 00:13:21.157 } 00:13:21.157 ] 00:13:21.157 }' 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.157 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 [2024-11-29 07:45:11.468892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 [2024-11-29 07:45:11.564421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.725 "name": "raid_bdev1", 00:13:21.726 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:21.726 "strip_size_kb": 0, 00:13:21.726 "state": "online", 00:13:21.726 "raid_level": "raid1", 00:13:21.726 "superblock": false, 00:13:21.726 "num_base_bdevs": 2, 00:13:21.726 "num_base_bdevs_discovered": 1, 00:13:21.726 "num_base_bdevs_operational": 1, 00:13:21.726 "base_bdevs_list": [ 00:13:21.726 { 00:13:21.726 "name": null, 00:13:21.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.726 "is_configured": false, 00:13:21.726 "data_offset": 0, 00:13:21.726 "data_size": 65536 00:13:21.726 }, 00:13:21.726 { 00:13:21.726 "name": "BaseBdev2", 00:13:21.726 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:21.726 "is_configured": true, 00:13:21.726 "data_offset": 0, 00:13:21.726 "data_size": 65536 00:13:21.726 } 00:13:21.726 ] 00:13:21.726 }' 00:13:21.726 07:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.726 07:45:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.726 [2024-11-29 07:45:11.655963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:21.726 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:13:21.726 Zero copy mechanism will not be used. 00:13:21.726 Running I/O for 60 seconds... 00:13:22.295 07:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.295 07:45:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.295 07:45:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.295 [2024-11-29 07:45:12.014924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.295 07:45:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.295 07:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.295 [2024-11-29 07:45:12.082435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:22.295 [2024-11-29 07:45:12.084459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.295 [2024-11-29 07:45:12.203702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.295 [2024-11-29 07:45:12.204354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.555 [2024-11-29 07:45:12.426066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.555 [2024-11-29 07:45:12.426472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:23.074 214.00 IOPS, 642.00 MiB/s [2024-11-29T07:45:13.019Z] [2024-11-29 07:45:12.780143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:23.334 [2024-11-29 07:45:13.023543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:23.334 [2024-11-29 07:45:13.023942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:23.334 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.334 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.334 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.335 "name": "raid_bdev1", 00:13:23.335 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:23.335 "strip_size_kb": 0, 00:13:23.335 "state": "online", 00:13:23.335 "raid_level": "raid1", 00:13:23.335 "superblock": false, 00:13:23.335 "num_base_bdevs": 2, 00:13:23.335 "num_base_bdevs_discovered": 2, 00:13:23.335 "num_base_bdevs_operational": 2, 00:13:23.335 "process": { 00:13:23.335 "type": "rebuild", 00:13:23.335 "target": "spare", 00:13:23.335 "progress": { 00:13:23.335 "blocks": 10240, 00:13:23.335 "percent": 15 00:13:23.335 } 00:13:23.335 }, 
00:13:23.335 "base_bdevs_list": [ 00:13:23.335 { 00:13:23.335 "name": "spare", 00:13:23.335 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:23.335 "is_configured": true, 00:13:23.335 "data_offset": 0, 00:13:23.335 "data_size": 65536 00:13:23.335 }, 00:13:23.335 { 00:13:23.335 "name": "BaseBdev2", 00:13:23.335 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:23.335 "is_configured": true, 00:13:23.335 "data_offset": 0, 00:13:23.335 "data_size": 65536 00:13:23.335 } 00:13:23.335 ] 00:13:23.335 }' 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.335 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.335 [2024-11-29 07:45:13.194672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.335 [2024-11-29 07:45:13.271266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:23.596 [2024-11-29 07:45:13.278267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.596 [2024-11-29 07:45:13.280537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.596 [2024-11-29 07:45:13.280572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.596 [2024-11-29 07:45:13.280586] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.596 [2024-11-29 07:45:13.322554] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.596 "name": "raid_bdev1", 00:13:23.596 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:23.596 "strip_size_kb": 0, 00:13:23.596 "state": "online", 00:13:23.596 "raid_level": "raid1", 00:13:23.596 "superblock": false, 00:13:23.596 "num_base_bdevs": 2, 00:13:23.596 "num_base_bdevs_discovered": 1, 00:13:23.596 "num_base_bdevs_operational": 1, 00:13:23.596 "base_bdevs_list": [ 00:13:23.596 { 00:13:23.596 "name": null, 00:13:23.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.596 "is_configured": false, 00:13:23.596 "data_offset": 0, 00:13:23.596 "data_size": 65536 00:13:23.596 }, 00:13:23.596 { 00:13:23.596 "name": "BaseBdev2", 00:13:23.596 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:23.596 "is_configured": true, 00:13:23.596 "data_offset": 0, 00:13:23.596 "data_size": 65536 00:13:23.596 } 00:13:23.596 ] 00:13:23.596 }' 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.596 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.116 179.50 IOPS, 538.50 MiB/s [2024-11-29T07:45:14.061Z] 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.116 "name": "raid_bdev1", 00:13:24.116 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:24.116 "strip_size_kb": 0, 00:13:24.116 "state": "online", 00:13:24.116 "raid_level": "raid1", 00:13:24.116 "superblock": false, 00:13:24.116 "num_base_bdevs": 2, 00:13:24.116 "num_base_bdevs_discovered": 1, 00:13:24.116 "num_base_bdevs_operational": 1, 00:13:24.116 "base_bdevs_list": [ 00:13:24.116 { 00:13:24.116 "name": null, 00:13:24.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.116 "is_configured": false, 00:13:24.116 "data_offset": 0, 00:13:24.116 "data_size": 65536 00:13:24.116 }, 00:13:24.116 { 00:13:24.116 "name": "BaseBdev2", 00:13:24.116 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:24.116 "is_configured": true, 00:13:24.116 "data_offset": 0, 00:13:24.116 "data_size": 65536 00:13:24.116 } 00:13:24.116 ] 00:13:24.116 }' 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.116 07:45:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.116 07:45:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.116 [2024-11-29 07:45:13.978393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.116 07:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.116 07:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.116 [2024-11-29 07:45:14.048532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:24.116 [2024-11-29 07:45:14.050397] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.377 [2024-11-29 07:45:14.163062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.377 [2024-11-29 07:45:14.163585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.636 [2024-11-29 07:45:14.371375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.636 [2024-11-29 07:45:14.371666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.896 [2024-11-29 07:45:14.607277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.896 172.33 IOPS, 517.00 MiB/s [2024-11-29T07:45:14.841Z] [2024-11-29 07:45:14.714283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.896 [2024-11-29 07:45:14.714517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.155 07:45:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.155 "name": "raid_bdev1", 00:13:25.155 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:25.155 "strip_size_kb": 0, 00:13:25.155 "state": "online", 00:13:25.155 "raid_level": "raid1", 00:13:25.155 "superblock": false, 00:13:25.155 "num_base_bdevs": 2, 00:13:25.155 "num_base_bdevs_discovered": 2, 00:13:25.155 "num_base_bdevs_operational": 2, 00:13:25.155 "process": { 00:13:25.155 "type": "rebuild", 00:13:25.155 "target": "spare", 00:13:25.155 "progress": { 00:13:25.155 "blocks": 12288, 00:13:25.155 "percent": 18 00:13:25.155 } 00:13:25.155 }, 00:13:25.155 "base_bdevs_list": [ 00:13:25.155 { 00:13:25.155 "name": "spare", 00:13:25.155 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:25.155 "is_configured": true, 00:13:25.155 "data_offset": 0, 00:13:25.155 "data_size": 65536 00:13:25.155 }, 00:13:25.155 { 00:13:25.155 "name": "BaseBdev2", 00:13:25.155 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:25.155 
"is_configured": true, 00:13:25.155 "data_offset": 0, 00:13:25.155 "data_size": 65536 00:13:25.155 } 00:13:25.155 ] 00:13:25.155 }' 00:13:25.155 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=394 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.414 "name": "raid_bdev1", 00:13:25.414 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:25.414 "strip_size_kb": 0, 00:13:25.414 "state": "online", 00:13:25.414 "raid_level": "raid1", 00:13:25.414 "superblock": false, 00:13:25.414 "num_base_bdevs": 2, 00:13:25.414 "num_base_bdevs_discovered": 2, 00:13:25.414 "num_base_bdevs_operational": 2, 00:13:25.414 "process": { 00:13:25.414 "type": "rebuild", 00:13:25.414 "target": "spare", 00:13:25.414 "progress": { 00:13:25.414 "blocks": 14336, 00:13:25.414 "percent": 21 00:13:25.414 } 00:13:25.414 }, 00:13:25.414 "base_bdevs_list": [ 00:13:25.414 { 00:13:25.414 "name": "spare", 00:13:25.414 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:25.414 "is_configured": true, 00:13:25.414 "data_offset": 0, 00:13:25.414 "data_size": 65536 00:13:25.414 }, 00:13:25.414 { 00:13:25.414 "name": "BaseBdev2", 00:13:25.414 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:25.414 "is_configured": true, 00:13:25.414 "data_offset": 0, 00:13:25.414 "data_size": 65536 00:13:25.414 } 00:13:25.414 ] 00:13:25.414 }' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.414 [2024-11-29 07:45:15.221147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.414 07:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.673 [2024-11-29 07:45:15.556555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.673 [2024-11-29 07:45:15.557161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.933 141.00 IOPS, 423.00 MiB/s [2024-11-29T07:45:15.878Z] [2024-11-29 07:45:15.767579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.501 "name": "raid_bdev1", 00:13:26.501 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:26.501 "strip_size_kb": 0, 00:13:26.501 "state": "online", 00:13:26.501 "raid_level": "raid1", 00:13:26.501 "superblock": false, 00:13:26.501 "num_base_bdevs": 2, 00:13:26.501 "num_base_bdevs_discovered": 2, 00:13:26.501 "num_base_bdevs_operational": 2, 00:13:26.501 "process": { 00:13:26.501 "type": "rebuild", 00:13:26.501 "target": "spare", 00:13:26.501 "progress": { 00:13:26.501 "blocks": 32768, 00:13:26.501 "percent": 50 00:13:26.501 } 00:13:26.501 }, 00:13:26.501 "base_bdevs_list": [ 00:13:26.501 { 00:13:26.501 "name": "spare", 00:13:26.501 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:26.501 "is_configured": true, 00:13:26.501 "data_offset": 0, 00:13:26.501 "data_size": 65536 00:13:26.501 }, 00:13:26.501 { 00:13:26.501 "name": "BaseBdev2", 00:13:26.501 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:26.501 "is_configured": true, 00:13:26.501 "data_offset": 0, 00:13:26.501 "data_size": 65536 00:13:26.501 } 00:13:26.501 ] 00:13:26.501 }' 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.501 [2024-11-29 07:45:16.403762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.501 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.761 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.761 07:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.761 [2024-11-29 07:45:16.632541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:27.020 122.00 IOPS, 366.00 MiB/s [2024-11-29T07:45:16.965Z] [2024-11-29 07:45:16.847000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:27.280 [2024-11-29 07:45:17.167918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.540 07:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.800 07:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.800 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.800 "name": "raid_bdev1", 00:13:27.800 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:27.800 "strip_size_kb": 0, 00:13:27.800 "state": "online", 00:13:27.800 "raid_level": "raid1", 00:13:27.800 "superblock": false, 
00:13:27.800 "num_base_bdevs": 2, 00:13:27.800 "num_base_bdevs_discovered": 2, 00:13:27.800 "num_base_bdevs_operational": 2, 00:13:27.800 "process": { 00:13:27.800 "type": "rebuild", 00:13:27.800 "target": "spare", 00:13:27.800 "progress": { 00:13:27.800 "blocks": 49152, 00:13:27.800 "percent": 75 00:13:27.800 } 00:13:27.800 }, 00:13:27.800 "base_bdevs_list": [ 00:13:27.800 { 00:13:27.800 "name": "spare", 00:13:27.800 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:27.801 "is_configured": true, 00:13:27.801 "data_offset": 0, 00:13:27.801 "data_size": 65536 00:13:27.801 }, 00:13:27.801 { 00:13:27.801 "name": "BaseBdev2", 00:13:27.801 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:27.801 "is_configured": true, 00:13:27.801 "data_offset": 0, 00:13:27.801 "data_size": 65536 00:13:27.801 } 00:13:27.801 ] 00:13:27.801 }' 00:13:27.801 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.801 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.801 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.801 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.801 07:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.370 109.33 IOPS, 328.00 MiB/s [2024-11-29T07:45:18.315Z] [2024-11-29 07:45:18.232453] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:28.629 [2024-11-29 07:45:18.337549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:28.629 [2024-11-29 07:45:18.340126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.889 "name": "raid_bdev1", 00:13:28.889 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:28.889 "strip_size_kb": 0, 00:13:28.889 "state": "online", 00:13:28.889 "raid_level": "raid1", 00:13:28.889 "superblock": false, 00:13:28.889 "num_base_bdevs": 2, 00:13:28.889 "num_base_bdevs_discovered": 2, 00:13:28.889 "num_base_bdevs_operational": 2, 00:13:28.889 "base_bdevs_list": [ 00:13:28.889 { 00:13:28.889 "name": "spare", 00:13:28.889 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:28.889 "is_configured": true, 00:13:28.889 "data_offset": 0, 00:13:28.889 "data_size": 65536 00:13:28.889 }, 00:13:28.889 { 00:13:28.889 "name": "BaseBdev2", 00:13:28.889 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:28.889 "is_configured": true, 00:13:28.889 "data_offset": 0, 00:13:28.889 "data_size": 65536 00:13:28.889 } 
00:13:28.889 ] 00:13:28.889 }' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.889 97.43 IOPS, 292.29 MiB/s [2024-11-29T07:45:18.834Z] 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.889 "name": "raid_bdev1", 00:13:28.889 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:28.889 "strip_size_kb": 0, 00:13:28.889 "state": "online", 
00:13:28.889 "raid_level": "raid1", 00:13:28.889 "superblock": false, 00:13:28.889 "num_base_bdevs": 2, 00:13:28.889 "num_base_bdevs_discovered": 2, 00:13:28.889 "num_base_bdevs_operational": 2, 00:13:28.889 "base_bdevs_list": [ 00:13:28.889 { 00:13:28.889 "name": "spare", 00:13:28.889 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:28.889 "is_configured": true, 00:13:28.889 "data_offset": 0, 00:13:28.889 "data_size": 65536 00:13:28.889 }, 00:13:28.889 { 00:13:28.889 "name": "BaseBdev2", 00:13:28.889 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:28.889 "is_configured": true, 00:13:28.889 "data_offset": 0, 00:13:28.889 "data_size": 65536 00:13:28.889 } 00:13:28.889 ] 00:13:28.889 }' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.889 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.162 "name": "raid_bdev1", 00:13:29.162 "uuid": "a79d593d-fad8-4378-9d82-0b4dc2f1af16", 00:13:29.162 "strip_size_kb": 0, 00:13:29.162 "state": "online", 00:13:29.162 "raid_level": "raid1", 00:13:29.162 "superblock": false, 00:13:29.162 "num_base_bdevs": 2, 00:13:29.162 "num_base_bdevs_discovered": 2, 00:13:29.162 "num_base_bdevs_operational": 2, 00:13:29.162 "base_bdevs_list": [ 00:13:29.162 { 00:13:29.162 "name": "spare", 00:13:29.162 "uuid": "3e3679e1-d270-569b-8439-7140a0fac7ee", 00:13:29.162 "is_configured": true, 00:13:29.162 "data_offset": 0, 00:13:29.162 "data_size": 65536 00:13:29.162 }, 00:13:29.162 { 00:13:29.162 "name": "BaseBdev2", 00:13:29.162 "uuid": "632d4eaf-38f4-5bea-a2da-13add0a5b8d7", 00:13:29.162 "is_configured": true, 00:13:29.162 "data_offset": 0, 00:13:29.162 "data_size": 65536 00:13:29.162 } 00:13:29.162 ] 00:13:29.162 }' 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.162 07:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 07:45:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 [2024-11-29 07:45:19.218344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.446 [2024-11-29 07:45:19.218381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.446 00:13:29.446 Latency(us) 00:13:29.446 [2024-11-29T07:45:19.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.446 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:29.446 raid_bdev1 : 7.67 91.17 273.51 0.00 0.00 15000.44 323.74 114015.47 00:13:29.446 [2024-11-29T07:45:19.391Z] =================================================================================================================== 00:13:29.446 [2024-11-29T07:45:19.391Z] Total : 91.17 273.51 0.00 0.00 15000.44 323.74 114015.47 00:13:29.446 [2024-11-29 07:45:19.331279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.446 [2024-11-29 07:45:19.331336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.446 [2024-11-29 07:45:19.331409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.446 [2024-11-29 07:45:19.331421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:29.446 { 00:13:29.446 "results": [ 00:13:29.446 { 00:13:29.446 "job": "raid_bdev1", 00:13:29.446 "core_mask": "0x1", 00:13:29.446 "workload": "randrw", 00:13:29.446 "percentage": 50, 00:13:29.446 "status": "finished", 00:13:29.446 "queue_depth": 2, 00:13:29.446 "io_size": 3145728, 00:13:29.446 "runtime": 7.666873, 
00:13:29.446 "iops": 91.17145934202901, 00:13:29.446 "mibps": 273.514378026087, 00:13:29.446 "io_failed": 0, 00:13:29.446 "io_timeout": 0, 00:13:29.446 "avg_latency_us": 15000.442433670058, 00:13:29.446 "min_latency_us": 323.74497816593885, 00:13:29.446 "max_latency_us": 114015.46899563319 00:13:29.446 } 00:13:29.446 ], 00:13:29.446 "core_count": 1 00:13:29.446 } 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@12 -- # local i 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.446 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:29.720 /dev/nbd0 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:29.720 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.721 1+0 records in 00:13:29.721 1+0 records out 00:13:29.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389764 s, 10.5 MB/s 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # size=4096 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.721 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:29.981 /dev/nbd1 00:13:29.981 07:45:19 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.981 1+0 records in 00:13:29.981 1+0 records out 00:13:29.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421768 s, 9.7 MB/s 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:29.981 07:45:19 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.981 07:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.241 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.500 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.500 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.500 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.501 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.760 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76194 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76194 ']' 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76194 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76194 00:13:30.761 killing process with pid 76194 00:13:30.761 Received shutdown signal, test time was about 8.909176 seconds 00:13:30.761 00:13:30.761 Latency(us) 00:13:30.761 [2024-11-29T07:45:20.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.761 [2024-11-29T07:45:20.706Z] =================================================================================================================== 00:13:30.761 [2024-11-29T07:45:20.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76194' 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76194 00:13:30.761 [2024-11-29 07:45:20.550026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.761 07:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76194 00:13:31.020 [2024-11-29 07:45:20.769968] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.960 07:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:31.960 00:13:31.960 real 0m12.059s 00:13:31.960 user 0m15.192s 00:13:31.960 sys 0m1.484s 00:13:31.960 07:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.960 07:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.960 ************************************ 
00:13:31.960 END TEST raid_rebuild_test_io 00:13:31.960 ************************************ 00:13:32.220 07:45:21 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:32.220 07:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:32.220 07:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.220 07:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 ************************************ 00:13:32.220 START TEST raid_rebuild_test_sb_io 00:13:32.220 ************************************ 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76571 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76571 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76571 ']' 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.220 
07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.220 07:45:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.220 Zero copy mechanism will not be used. 00:13:32.220 [2024-11-29 07:45:22.060589] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:32.220 [2024-11-29 07:45:22.060704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:13:32.480 [2024-11-29 07:45:22.231432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.480 [2024-11-29 07:45:22.337172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.739 [2024-11-29 07:45:22.523257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.739 [2024-11-29 07:45:22.523300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.999 BaseBdev1_malloc 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.999 [2024-11-29 07:45:22.921357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.999 [2024-11-29 07:45:22.921443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.999 [2024-11-29 07:45:22.921464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.999 [2024-11-29 07:45:22.921475] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.999 [2024-11-29 07:45:22.923502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.999 [2024-11-29 07:45:22.923539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.999 BaseBdev1 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:32.999 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 BaseBdev2_malloc 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 [2024-11-29 07:45:22.974745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:33.260 [2024-11-29 07:45:22.974816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.260 [2024-11-29 07:45:22.974837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.260 [2024-11-29 07:45:22.974847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.260 [2024-11-29 07:45:22.976872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.260 [2024-11-29 07:45:22.976910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.260 BaseBdev2 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 spare_malloc 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 spare_delay 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 [2024-11-29 07:45:23.072363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.260 [2024-11-29 07:45:23.072418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.260 [2024-11-29 07:45:23.072436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:33.260 [2024-11-29 07:45:23.072446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.260 [2024-11-29 07:45:23.074520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.260 [2024-11-29 07:45:23.074559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.260 spare 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 [2024-11-29 07:45:23.084395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.260 [2024-11-29 07:45:23.086128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.260 [2024-11-29 07:45:23.086287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:33.260 [2024-11-29 07:45:23.086302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.260 [2024-11-29 07:45:23.086537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:33.260 [2024-11-29 07:45:23.086710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:33.260 [2024-11-29 07:45:23.086726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:33.260 [2024-11-29 07:45:23.086863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.260 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.260 "name": "raid_bdev1", 00:13:33.260 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:33.260 "strip_size_kb": 0, 00:13:33.260 "state": "online", 00:13:33.260 "raid_level": "raid1", 00:13:33.260 "superblock": true, 00:13:33.260 "num_base_bdevs": 2, 00:13:33.260 "num_base_bdevs_discovered": 2, 00:13:33.260 "num_base_bdevs_operational": 2, 00:13:33.260 "base_bdevs_list": [ 00:13:33.260 { 00:13:33.260 "name": "BaseBdev1", 00:13:33.260 "uuid": "e9bc99eb-7847-5e14-a13b-6edf6fdb707f", 00:13:33.260 "is_configured": true, 00:13:33.260 "data_offset": 2048, 00:13:33.260 "data_size": 63488 00:13:33.260 }, 00:13:33.260 { 00:13:33.260 "name": "BaseBdev2", 00:13:33.260 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:33.260 "is_configured": true, 00:13:33.261 "data_offset": 2048, 00:13:33.261 "data_size": 63488 00:13:33.261 } 00:13:33.261 ] 00:13:33.261 }' 00:13:33.261 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:33.261 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.830 [2024-11-29 07:45:23.519954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.830 [2024-11-29 07:45:23.599509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.830 07:45:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.830 "name": "raid_bdev1", 00:13:33.830 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:33.830 "strip_size_kb": 0, 00:13:33.830 "state": "online", 00:13:33.830 "raid_level": "raid1", 00:13:33.830 "superblock": true, 00:13:33.830 "num_base_bdevs": 2, 00:13:33.830 "num_base_bdevs_discovered": 1, 00:13:33.830 "num_base_bdevs_operational": 1, 00:13:33.830 "base_bdevs_list": [ 00:13:33.830 { 00:13:33.830 "name": null, 00:13:33.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.830 "is_configured": false, 00:13:33.830 "data_offset": 0, 00:13:33.830 "data_size": 63488 00:13:33.830 }, 00:13:33.830 { 00:13:33.830 "name": "BaseBdev2", 00:13:33.830 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:33.830 "is_configured": true, 00:13:33.830 "data_offset": 2048, 00:13:33.830 "data_size": 63488 00:13:33.830 } 00:13:33.830 ] 00:13:33.830 }' 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.830 07:45:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.831 [2024-11-29 07:45:23.695226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:33.831 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.831 Zero copy mechanism will not be used. 00:13:33.831 Running I/O for 60 seconds... 
00:13:34.090 07:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.090 07:45:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.090 07:45:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.350 [2024-11-29 07:45:24.035249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.350 07:45:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.350 07:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:34.350 [2024-11-29 07:45:24.083507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:34.350 [2024-11-29 07:45:24.085416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.350 [2024-11-29 07:45:24.186696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.350 [2024-11-29 07:45:24.187094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.610 [2024-11-29 07:45:24.304067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.610 [2024-11-29 07:45:24.304295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.870 [2024-11-29 07:45:24.640934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:35.130 172.00 IOPS, 516.00 MiB/s [2024-11-29T07:45:25.075Z] [2024-11-29 07:45:24.852830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:35.130 [2024-11-29 07:45:24.853147] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.389 "name": "raid_bdev1", 00:13:35.389 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:35.389 "strip_size_kb": 0, 00:13:35.389 "state": "online", 00:13:35.389 "raid_level": "raid1", 00:13:35.389 "superblock": true, 00:13:35.389 "num_base_bdevs": 2, 00:13:35.389 "num_base_bdevs_discovered": 2, 00:13:35.389 "num_base_bdevs_operational": 2, 00:13:35.389 "process": { 00:13:35.389 "type": "rebuild", 00:13:35.389 "target": "spare", 00:13:35.389 "progress": { 00:13:35.389 "blocks": 12288, 00:13:35.389 "percent": 19 00:13:35.389 } 00:13:35.389 }, 00:13:35.389 "base_bdevs_list": [ 00:13:35.389 { 00:13:35.389 "name": "spare", 
00:13:35.389 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:35.389 "is_configured": true, 00:13:35.389 "data_offset": 2048, 00:13:35.389 "data_size": 63488 00:13:35.389 }, 00:13:35.389 { 00:13:35.389 "name": "BaseBdev2", 00:13:35.389 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:35.389 "is_configured": true, 00:13:35.389 "data_offset": 2048, 00:13:35.389 "data_size": 63488 00:13:35.389 } 00:13:35.389 ] 00:13:35.389 }' 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.389 [2024-11-29 07:45:25.202311] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:35.389 [2024-11-29 07:45:25.202797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.389 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.389 [2024-11-29 07:45:25.233911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.389 [2024-11-29 07:45:25.308853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:35.389 [2024-11-29 07:45:25.309245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:13:35.649 [2024-11-29 07:45:25.410105] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.649 [2024-11-29 07:45:25.412178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.649 [2024-11-29 07:45:25.412217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.649 [2024-11-29 07:45:25.412228] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.649 [2024-11-29 07:45:25.461211] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.649 "name": "raid_bdev1", 00:13:35.649 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:35.649 "strip_size_kb": 0, 00:13:35.649 "state": "online", 00:13:35.649 "raid_level": "raid1", 00:13:35.649 "superblock": true, 00:13:35.649 "num_base_bdevs": 2, 00:13:35.649 "num_base_bdevs_discovered": 1, 00:13:35.649 "num_base_bdevs_operational": 1, 00:13:35.649 "base_bdevs_list": [ 00:13:35.649 { 00:13:35.649 "name": null, 00:13:35.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.649 "is_configured": false, 00:13:35.649 "data_offset": 0, 00:13:35.649 "data_size": 63488 00:13:35.649 }, 00:13:35.649 { 00:13:35.649 "name": "BaseBdev2", 00:13:35.649 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:35.649 "is_configured": true, 00:13:35.649 "data_offset": 2048, 00:13:35.649 "data_size": 63488 00:13:35.649 } 00:13:35.649 ] 00:13:35.649 }' 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.649 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.168 144.50 IOPS, 433.50 MiB/s [2024-11-29T07:45:26.113Z] 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.168 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.169 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.169 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.169 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.169 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.169 "name": "raid_bdev1", 00:13:36.169 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:36.169 "strip_size_kb": 0, 00:13:36.169 "state": "online", 00:13:36.169 "raid_level": "raid1", 00:13:36.169 "superblock": true, 00:13:36.169 "num_base_bdevs": 2, 00:13:36.169 "num_base_bdevs_discovered": 1, 00:13:36.169 "num_base_bdevs_operational": 1, 00:13:36.169 "base_bdevs_list": [ 00:13:36.169 { 00:13:36.169 "name": null, 00:13:36.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.169 "is_configured": false, 00:13:36.169 "data_offset": 0, 00:13:36.169 "data_size": 63488 00:13:36.169 }, 00:13:36.169 { 00:13:36.169 "name": "BaseBdev2", 00:13:36.169 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:36.169 "is_configured": true, 00:13:36.169 "data_offset": 2048, 00:13:36.169 "data_size": 63488 00:13:36.169 } 00:13:36.169 ] 00:13:36.169 }' 00:13:36.169 07:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 
-- # [[ none == \n\o\n\e ]] 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.169 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.169 [2024-11-29 07:45:26.081908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.429 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.429 07:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:36.429 [2024-11-29 07:45:26.128655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:36.429 [2024-11-29 07:45:26.130600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.429 [2024-11-29 07:45:26.231193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.429 [2024-11-29 07:45:26.231605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.429 [2024-11-29 07:45:26.356578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.429 [2024-11-29 07:45:26.356815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.689 [2024-11-29 07:45:26.586399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.689 
[2024-11-29 07:45:26.586949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.948 165.33 IOPS, 496.00 MiB/s [2024-11-29T07:45:26.893Z] [2024-11-29 07:45:26.806069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.208 [2024-11-29 07:45:27.129611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:37.208 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.468 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.468 "name": "raid_bdev1", 00:13:37.468 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:37.468 "strip_size_kb": 0, 00:13:37.468 "state": "online", 00:13:37.468 "raid_level": "raid1", 00:13:37.468 "superblock": 
true, 00:13:37.468 "num_base_bdevs": 2, 00:13:37.468 "num_base_bdevs_discovered": 2, 00:13:37.468 "num_base_bdevs_operational": 2, 00:13:37.468 "process": { 00:13:37.468 "type": "rebuild", 00:13:37.468 "target": "spare", 00:13:37.468 "progress": { 00:13:37.468 "blocks": 14336, 00:13:37.468 "percent": 22 00:13:37.468 } 00:13:37.468 }, 00:13:37.468 "base_bdevs_list": [ 00:13:37.468 { 00:13:37.468 "name": "spare", 00:13:37.468 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:37.468 "is_configured": true, 00:13:37.468 "data_offset": 2048, 00:13:37.468 "data_size": 63488 00:13:37.468 }, 00:13:37.468 { 00:13:37.468 "name": "BaseBdev2", 00:13:37.468 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:37.468 "is_configured": true, 00:13:37.468 "data_offset": 2048, 00:13:37.468 "data_size": 63488 00:13:37.468 } 00:13:37.468 ] 00:13:37.468 }' 00:13:37.468 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.468 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.469 [2024-11-29 07:45:27.238176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:37.469 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 
-- # '[' raid1 = raid1 ']' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.469 "name": "raid_bdev1", 00:13:37.469 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:37.469 "strip_size_kb": 0, 00:13:37.469 "state": "online", 00:13:37.469 "raid_level": "raid1", 00:13:37.469 "superblock": true, 00:13:37.469 "num_base_bdevs": 2, 00:13:37.469 "num_base_bdevs_discovered": 2, 00:13:37.469 "num_base_bdevs_operational": 2, 00:13:37.469 "process": { 00:13:37.469 "type": 
"rebuild", 00:13:37.469 "target": "spare", 00:13:37.469 "progress": { 00:13:37.469 "blocks": 16384, 00:13:37.469 "percent": 25 00:13:37.469 } 00:13:37.469 }, 00:13:37.469 "base_bdevs_list": [ 00:13:37.469 { 00:13:37.469 "name": "spare", 00:13:37.469 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:37.469 "is_configured": true, 00:13:37.469 "data_offset": 2048, 00:13:37.469 "data_size": 63488 00:13:37.469 }, 00:13:37.469 { 00:13:37.469 "name": "BaseBdev2", 00:13:37.469 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:37.469 "is_configured": true, 00:13:37.469 "data_offset": 2048, 00:13:37.469 "data_size": 63488 00:13:37.469 } 00:13:37.469 ] 00:13:37.469 }' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.469 07:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.729 [2024-11-29 07:45:27.442280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:37.729 [2024-11-29 07:45:27.545371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.729 [2024-11-29 07:45:27.545615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.987 142.25 IOPS, 426.75 MiB/s [2024-11-29T07:45:27.932Z] [2024-11-29 07:45:27.874125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:38.247 [2024-11-29 07:45:28.087966] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:38.247 [2024-11-29 07:45:28.088241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.507 "name": "raid_bdev1", 00:13:38.507 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:38.507 "strip_size_kb": 0, 00:13:38.507 "state": "online", 00:13:38.507 "raid_level": "raid1", 00:13:38.507 "superblock": true, 00:13:38.507 "num_base_bdevs": 2, 00:13:38.507 "num_base_bdevs_discovered": 2, 00:13:38.507 "num_base_bdevs_operational": 2, 00:13:38.507 
"process": { 00:13:38.507 "type": "rebuild", 00:13:38.507 "target": "spare", 00:13:38.507 "progress": { 00:13:38.507 "blocks": 30720, 00:13:38.507 "percent": 48 00:13:38.507 } 00:13:38.507 }, 00:13:38.507 "base_bdevs_list": [ 00:13:38.507 { 00:13:38.507 "name": "spare", 00:13:38.507 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:38.507 "is_configured": true, 00:13:38.507 "data_offset": 2048, 00:13:38.507 "data_size": 63488 00:13:38.507 }, 00:13:38.507 { 00:13:38.507 "name": "BaseBdev2", 00:13:38.507 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:38.507 "is_configured": true, 00:13:38.507 "data_offset": 2048, 00:13:38.507 "data_size": 63488 00:13:38.507 } 00:13:38.507 ] 00:13:38.507 }' 00:13:38.507 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.767 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.767 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.767 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.767 07:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.767 [2024-11-29 07:45:28.529740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:38.767 [2024-11-29 07:45:28.529986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:39.707 121.00 IOPS, 363.00 MiB/s [2024-11-29T07:45:29.652Z] 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.707 "name": "raid_bdev1", 00:13:39.707 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:39.707 "strip_size_kb": 0, 00:13:39.707 "state": "online", 00:13:39.707 "raid_level": "raid1", 00:13:39.707 "superblock": true, 00:13:39.707 "num_base_bdevs": 2, 00:13:39.707 "num_base_bdevs_discovered": 2, 00:13:39.707 "num_base_bdevs_operational": 2, 00:13:39.707 "process": { 00:13:39.707 "type": "rebuild", 00:13:39.707 "target": "spare", 00:13:39.707 "progress": { 00:13:39.707 "blocks": 51200, 00:13:39.707 "percent": 80 00:13:39.707 } 00:13:39.707 }, 00:13:39.707 "base_bdevs_list": [ 00:13:39.707 { 00:13:39.707 "name": "spare", 00:13:39.707 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:39.707 "is_configured": true, 00:13:39.707 "data_offset": 2048, 00:13:39.707 "data_size": 63488 00:13:39.707 }, 00:13:39.707 { 00:13:39.707 "name": "BaseBdev2", 00:13:39.707 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:39.707 "is_configured": true, 00:13:39.707 
"data_offset": 2048, 00:13:39.707 "data_size": 63488 00:13:39.707 } 00:13:39.707 ] 00:13:39.707 }' 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.707 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.967 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.967 07:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.226 107.83 IOPS, 323.50 MiB/s [2024-11-29T07:45:30.171Z] [2024-11-29 07:45:30.066137] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:40.486 [2024-11-29 07:45:30.171350] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.486 [2024-11-29 07:45:30.173666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.756 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.031 96.14 IOPS, 288.43 MiB/s [2024-11-29T07:45:30.976Z] 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.031 "name": "raid_bdev1", 00:13:41.031 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:41.031 "strip_size_kb": 0, 00:13:41.031 "state": "online", 00:13:41.031 "raid_level": "raid1", 00:13:41.031 "superblock": true, 00:13:41.031 "num_base_bdevs": 2, 00:13:41.031 "num_base_bdevs_discovered": 2, 00:13:41.031 "num_base_bdevs_operational": 2, 00:13:41.031 "base_bdevs_list": [ 00:13:41.031 { 00:13:41.031 "name": "spare", 00:13:41.031 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:41.031 "is_configured": true, 00:13:41.031 "data_offset": 2048, 00:13:41.031 "data_size": 63488 00:13:41.031 }, 00:13:41.031 { 00:13:41.031 "name": "BaseBdev2", 00:13:41.031 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:41.031 "is_configured": true, 00:13:41.031 "data_offset": 2048, 00:13:41.031 "data_size": 63488 00:13:41.031 } 00:13:41.031 ] 00:13:41.031 }' 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.031 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.032 "name": "raid_bdev1", 00:13:41.032 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:41.032 "strip_size_kb": 0, 00:13:41.032 "state": "online", 00:13:41.032 "raid_level": "raid1", 00:13:41.032 "superblock": true, 00:13:41.032 "num_base_bdevs": 2, 00:13:41.032 "num_base_bdevs_discovered": 2, 00:13:41.032 "num_base_bdevs_operational": 2, 00:13:41.032 "base_bdevs_list": [ 00:13:41.032 { 00:13:41.032 "name": "spare", 00:13:41.032 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:41.032 "is_configured": true, 00:13:41.032 "data_offset": 2048, 00:13:41.032 "data_size": 63488 00:13:41.032 }, 00:13:41.032 { 00:13:41.032 "name": "BaseBdev2", 00:13:41.032 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:41.032 "is_configured": true, 00:13:41.032 "data_offset": 2048, 00:13:41.032 "data_size": 
63488 00:13:41.032 } 00:13:41.032 ] 00:13:41.032 }' 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.032 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:41.292 07:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.292 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.292 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.292 "name": "raid_bdev1", 00:13:41.292 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:41.292 "strip_size_kb": 0, 00:13:41.292 "state": "online", 00:13:41.292 "raid_level": "raid1", 00:13:41.292 "superblock": true, 00:13:41.292 "num_base_bdevs": 2, 00:13:41.292 "num_base_bdevs_discovered": 2, 00:13:41.292 "num_base_bdevs_operational": 2, 00:13:41.292 "base_bdevs_list": [ 00:13:41.292 { 00:13:41.292 "name": "spare", 00:13:41.292 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:41.292 "is_configured": true, 00:13:41.292 "data_offset": 2048, 00:13:41.292 "data_size": 63488 00:13:41.292 }, 00:13:41.292 { 00:13:41.292 "name": "BaseBdev2", 00:13:41.292 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:41.292 "is_configured": true, 00:13:41.292 "data_offset": 2048, 00:13:41.292 "data_size": 63488 00:13:41.292 } 00:13:41.292 ] 00:13:41.292 }' 00:13:41.292 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.292 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.552 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.552 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.552 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.552 [2024-11-29 07:45:31.466902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.552 [2024-11-29 07:45:31.466938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.810 00:13:41.810 
Latency(us) 00:13:41.810 [2024-11-29T07:45:31.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.810 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:41.810 raid_bdev1 : 7.88 89.16 267.48 0.00 0.00 15527.53 305.86 108520.75 00:13:41.810 [2024-11-29T07:45:31.755Z] =================================================================================================================== 00:13:41.810 [2024-11-29T07:45:31.755Z] Total : 89.16 267.48 0.00 0.00 15527.53 305.86 108520.75 00:13:41.810 { 00:13:41.810 "results": [ 00:13:41.810 { 00:13:41.810 "job": "raid_bdev1", 00:13:41.811 "core_mask": "0x1", 00:13:41.811 "workload": "randrw", 00:13:41.811 "percentage": 50, 00:13:41.811 "status": "finished", 00:13:41.811 "queue_depth": 2, 00:13:41.811 "io_size": 3145728, 00:13:41.811 "runtime": 7.884631, 00:13:41.811 "iops": 89.16079902788096, 00:13:41.811 "mibps": 267.48239708364287, 00:13:41.811 "io_failed": 0, 00:13:41.811 "io_timeout": 0, 00:13:41.811 "avg_latency_us": 15527.528478697035, 00:13:41.811 "min_latency_us": 305.8585152838428, 00:13:41.811 "max_latency_us": 108520.74759825328 00:13:41.811 } 00:13:41.811 ], 00:13:41.811 "core_count": 1 00:13:41.811 } 00:13:41.811 [2024-11-29 07:45:31.588313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.811 [2024-11-29 07:45:31.588383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.811 [2024-11-29 07:45:31.588457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.811 [2024-11-29 07:45:31.588466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.811 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:42.070 /dev/nbd0 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 
-- # basename /dev/nbd0 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.070 1+0 records in 00:13:42.070 1+0 records out 00:13:42.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430605 s, 9.5 MB/s 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:42.070 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.071 07:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:42.330 /dev/nbd1 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:42.330 07:45:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.330 1+0 records in 00:13:42.330 1+0 records out 00:13:42.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056878 s, 7.2 MB/s 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.330 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.591 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.851 [2024-11-29 07:45:32.752701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.851 [2024-11-29 07:45:32.752766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.851 [2024-11-29 07:45:32.752795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:42.851 [2024-11-29 07:45:32.752805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.851 [2024-11-29 07:45:32.754984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.851 [2024-11-29 07:45:32.755083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.851 [2024-11-29 07:45:32.755202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:42.851 [2024-11-29 07:45:32.755252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.851 [2024-11-29 07:45:32.755398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.851 spare 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.851 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.110 [2024-11-29 07:45:32.855297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:43.110 [2024-11-29 07:45:32.855396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:43.110 [2024-11-29 07:45:32.855741] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:43.110 [2024-11-29 07:45:32.855961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:43.110 [2024-11-29 07:45:32.855973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:43.110 [2024-11-29 07:45:32.856182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.110 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.111 07:45:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.111 "name": "raid_bdev1", 00:13:43.111 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:43.111 "strip_size_kb": 0, 00:13:43.111 "state": "online", 00:13:43.111 "raid_level": "raid1", 00:13:43.111 "superblock": true, 00:13:43.111 "num_base_bdevs": 2, 00:13:43.111 "num_base_bdevs_discovered": 2, 00:13:43.111 "num_base_bdevs_operational": 2, 00:13:43.111 "base_bdevs_list": [ 00:13:43.111 { 00:13:43.111 "name": "spare", 00:13:43.111 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:43.111 "is_configured": true, 00:13:43.111 "data_offset": 2048, 00:13:43.111 "data_size": 63488 00:13:43.111 }, 00:13:43.111 { 00:13:43.111 "name": "BaseBdev2", 00:13:43.111 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:43.111 "is_configured": true, 00:13:43.111 "data_offset": 2048, 00:13:43.111 "data_size": 63488 00:13:43.111 } 00:13:43.111 ] 00:13:43.111 }' 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.111 07:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.681 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.681 "name": "raid_bdev1", 00:13:43.681 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:43.681 "strip_size_kb": 0, 00:13:43.681 "state": "online", 00:13:43.681 "raid_level": "raid1", 00:13:43.681 "superblock": true, 00:13:43.681 "num_base_bdevs": 2, 00:13:43.681 "num_base_bdevs_discovered": 2, 00:13:43.681 "num_base_bdevs_operational": 2, 00:13:43.681 "base_bdevs_list": [ 00:13:43.681 { 00:13:43.681 "name": "spare", 00:13:43.681 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:43.681 "is_configured": true, 00:13:43.681 "data_offset": 2048, 00:13:43.682 "data_size": 63488 00:13:43.682 }, 00:13:43.682 { 00:13:43.682 "name": "BaseBdev2", 00:13:43.682 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:43.682 "is_configured": true, 00:13:43.682 "data_offset": 2048, 00:13:43.682 "data_size": 63488 00:13:43.682 } 00:13:43.682 ] 00:13:43.682 }' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.682 [2024-11-29 07:45:33.503743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.682 07:45:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.682 "name": "raid_bdev1", 00:13:43.682 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:43.682 "strip_size_kb": 0, 00:13:43.682 "state": "online", 00:13:43.682 "raid_level": "raid1", 00:13:43.682 "superblock": true, 00:13:43.682 "num_base_bdevs": 2, 00:13:43.682 "num_base_bdevs_discovered": 1, 00:13:43.682 "num_base_bdevs_operational": 1, 00:13:43.682 "base_bdevs_list": [ 00:13:43.682 { 00:13:43.682 "name": null, 00:13:43.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.682 "is_configured": false, 00:13:43.682 "data_offset": 0, 00:13:43.682 "data_size": 63488 00:13:43.682 }, 00:13:43.682 { 00:13:43.682 "name": "BaseBdev2", 00:13:43.682 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:43.682 "is_configured": true, 00:13:43.682 "data_offset": 2048, 00:13:43.682 "data_size": 63488 00:13:43.682 } 00:13:43.682 ] 00:13:43.682 }' 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.682 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.251 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.251 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 [2024-11-29 07:45:33.971026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.251 [2024-11-29 07:45:33.971276] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.251 [2024-11-29 07:45:33.971340] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:44.251 [2024-11-29 07:45:33.971416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.251 [2024-11-29 07:45:33.987678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:44.251 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.251 07:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:44.251 [2024-11-29 07:45:33.989541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.190 
07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.190 07:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.190 "name": "raid_bdev1", 00:13:45.190 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:45.190 "strip_size_kb": 0, 00:13:45.190 "state": "online", 00:13:45.190 "raid_level": "raid1", 00:13:45.190 "superblock": true, 00:13:45.190 "num_base_bdevs": 2, 00:13:45.190 "num_base_bdevs_discovered": 2, 00:13:45.190 "num_base_bdevs_operational": 2, 00:13:45.190 "process": { 00:13:45.190 "type": "rebuild", 00:13:45.190 "target": "spare", 00:13:45.190 "progress": { 00:13:45.190 "blocks": 20480, 00:13:45.190 "percent": 32 00:13:45.190 } 00:13:45.190 }, 00:13:45.190 "base_bdevs_list": [ 00:13:45.190 { 00:13:45.190 "name": "spare", 00:13:45.190 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:45.190 "is_configured": true, 00:13:45.190 "data_offset": 2048, 00:13:45.190 "data_size": 63488 00:13:45.190 }, 00:13:45.190 { 00:13:45.190 "name": "BaseBdev2", 00:13:45.190 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:45.190 "is_configured": true, 00:13:45.190 "data_offset": 2048, 00:13:45.190 "data_size": 63488 00:13:45.190 } 00:13:45.190 ] 00:13:45.190 }' 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.190 07:45:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.190 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.190 [2024-11-29 07:45:35.125225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.449 [2024-11-29 07:45:35.194578] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.449 [2024-11-29 07:45:35.194702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.449 [2024-11-29 07:45:35.194737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.449 [2024-11-29 07:45:35.194760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.449 07:45:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.449 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.450 "name": "raid_bdev1", 00:13:45.450 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:45.450 "strip_size_kb": 0, 00:13:45.450 "state": "online", 00:13:45.450 "raid_level": "raid1", 00:13:45.450 "superblock": true, 00:13:45.450 "num_base_bdevs": 2, 00:13:45.450 "num_base_bdevs_discovered": 1, 00:13:45.450 "num_base_bdevs_operational": 1, 00:13:45.450 "base_bdevs_list": [ 00:13:45.450 { 00:13:45.450 "name": null, 00:13:45.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.450 "is_configured": false, 00:13:45.450 "data_offset": 0, 00:13:45.450 "data_size": 63488 00:13:45.450 }, 00:13:45.450 { 00:13:45.450 "name": "BaseBdev2", 00:13:45.450 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:45.450 "is_configured": true, 00:13:45.450 "data_offset": 2048, 00:13:45.450 
"data_size": 63488 00:13:45.450 } 00:13:45.450 ] 00:13:45.450 }' 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.450 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.019 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:46.019 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.019 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.019 [2024-11-29 07:45:35.704247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:46.019 [2024-11-29 07:45:35.704379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.019 [2024-11-29 07:45:35.704419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:46.019 [2024-11-29 07:45:35.704454] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.019 [2024-11-29 07:45:35.704971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.019 [2024-11-29 07:45:35.705049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:46.019 [2024-11-29 07:45:35.705201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:46.019 [2024-11-29 07:45:35.705258] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:46.019 [2024-11-29 07:45:35.705305] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:46.019 [2024-11-29 07:45:35.705367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.019 [2024-11-29 07:45:35.721769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:46.019 spare 00:13:46.019 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.019 [2024-11-29 07:45:35.723603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.019 07:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.958 "name": "raid_bdev1", 00:13:46.958 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:46.958 "strip_size_kb": 0, 00:13:46.958 
"state": "online", 00:13:46.958 "raid_level": "raid1", 00:13:46.958 "superblock": true, 00:13:46.958 "num_base_bdevs": 2, 00:13:46.958 "num_base_bdevs_discovered": 2, 00:13:46.958 "num_base_bdevs_operational": 2, 00:13:46.958 "process": { 00:13:46.958 "type": "rebuild", 00:13:46.958 "target": "spare", 00:13:46.958 "progress": { 00:13:46.958 "blocks": 20480, 00:13:46.958 "percent": 32 00:13:46.958 } 00:13:46.958 }, 00:13:46.958 "base_bdevs_list": [ 00:13:46.958 { 00:13:46.958 "name": "spare", 00:13:46.958 "uuid": "a0498483-8695-58fa-a9d3-75f1de0b98f3", 00:13:46.958 "is_configured": true, 00:13:46.958 "data_offset": 2048, 00:13:46.958 "data_size": 63488 00:13:46.958 }, 00:13:46.958 { 00:13:46.958 "name": "BaseBdev2", 00:13:46.958 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:46.958 "is_configured": true, 00:13:46.958 "data_offset": 2048, 00:13:46.958 "data_size": 63488 00:13:46.958 } 00:13:46.958 ] 00:13:46.958 }' 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.958 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.958 [2024-11-29 07:45:36.872206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.217 [2024-11-29 07:45:36.928558] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:47.217 [2024-11-29 07:45:36.928656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.217 [2024-11-29 07:45:36.928676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.217 [2024-11-29 07:45:36.928683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.217 07:45:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 07:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.217 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.217 "name": "raid_bdev1", 00:13:47.217 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:47.217 "strip_size_kb": 0, 00:13:47.217 "state": "online", 00:13:47.217 "raid_level": "raid1", 00:13:47.217 "superblock": true, 00:13:47.217 "num_base_bdevs": 2, 00:13:47.217 "num_base_bdevs_discovered": 1, 00:13:47.217 "num_base_bdevs_operational": 1, 00:13:47.217 "base_bdevs_list": [ 00:13:47.217 { 00:13:47.217 "name": null, 00:13:47.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.218 "is_configured": false, 00:13:47.218 "data_offset": 0, 00:13:47.218 "data_size": 63488 00:13:47.218 }, 00:13:47.218 { 00:13:47.218 "name": "BaseBdev2", 00:13:47.218 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:47.218 "is_configured": true, 00:13:47.218 "data_offset": 2048, 00:13:47.218 "data_size": 63488 00:13:47.218 } 00:13:47.218 ] 00:13:47.218 }' 00:13:47.218 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.218 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.786 "name": "raid_bdev1", 00:13:47.786 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:47.786 "strip_size_kb": 0, 00:13:47.786 "state": "online", 00:13:47.786 "raid_level": "raid1", 00:13:47.786 "superblock": true, 00:13:47.786 "num_base_bdevs": 2, 00:13:47.786 "num_base_bdevs_discovered": 1, 00:13:47.786 "num_base_bdevs_operational": 1, 00:13:47.786 "base_bdevs_list": [ 00:13:47.786 { 00:13:47.786 "name": null, 00:13:47.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.786 "is_configured": false, 00:13:47.786 "data_offset": 0, 00:13:47.786 "data_size": 63488 00:13:47.786 }, 00:13:47.786 { 00:13:47.786 "name": "BaseBdev2", 00:13:47.786 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:47.786 "is_configured": true, 00:13:47.786 "data_offset": 2048, 00:13:47.786 "data_size": 63488 00:13:47.786 } 00:13:47.786 ] 00:13:47.786 }' 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.786 [2024-11-29 07:45:37.602133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.786 [2024-11-29 07:45:37.602187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.786 [2024-11-29 07:45:37.602230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:47.786 [2024-11-29 07:45:37.602242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.786 [2024-11-29 07:45:37.602707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.786 [2024-11-29 07:45:37.602732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.786 [2024-11-29 07:45:37.602819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:47.786 [2024-11-29 07:45:37.602834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.786 [2024-11-29 07:45:37.602843] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.786 [2024-11-29 07:45:37.602853] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:47.786 BaseBdev1 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.786 07:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.725 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.984 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.984 "name": "raid_bdev1", 00:13:48.984 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:48.984 "strip_size_kb": 0, 00:13:48.984 "state": "online", 00:13:48.984 "raid_level": "raid1", 00:13:48.984 "superblock": true, 00:13:48.984 "num_base_bdevs": 2, 00:13:48.984 "num_base_bdevs_discovered": 1, 00:13:48.984 "num_base_bdevs_operational": 1, 00:13:48.984 "base_bdevs_list": [ 00:13:48.984 { 00:13:48.984 "name": null, 00:13:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.985 "is_configured": false, 00:13:48.985 "data_offset": 0, 00:13:48.985 "data_size": 63488 00:13:48.985 }, 00:13:48.985 { 00:13:48.985 "name": "BaseBdev2", 00:13:48.985 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:48.985 "is_configured": true, 00:13:48.985 "data_offset": 2048, 00:13:48.985 "data_size": 63488 00:13:48.985 } 00:13:48.985 ] 00:13:48.985 }' 00:13:48.985 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.985 07:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.245 "name": "raid_bdev1", 00:13:49.245 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:49.245 "strip_size_kb": 0, 00:13:49.245 "state": "online", 00:13:49.245 "raid_level": "raid1", 00:13:49.245 "superblock": true, 00:13:49.245 "num_base_bdevs": 2, 00:13:49.245 "num_base_bdevs_discovered": 1, 00:13:49.245 "num_base_bdevs_operational": 1, 00:13:49.245 "base_bdevs_list": [ 00:13:49.245 { 00:13:49.245 "name": null, 00:13:49.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.245 "is_configured": false, 00:13:49.245 "data_offset": 0, 00:13:49.245 "data_size": 63488 00:13:49.245 }, 00:13:49.245 { 00:13:49.245 "name": "BaseBdev2", 00:13:49.245 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:49.245 "is_configured": true, 00:13:49.245 "data_offset": 2048, 00:13:49.245 "data_size": 63488 00:13:49.245 } 00:13:49.245 ] 00:13:49.245 }' 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.245 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.504 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.504 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.504 07:45:39 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:49.504 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.504 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.505 [2024-11-29 07:45:39.228537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.505 [2024-11-29 07:45:39.228938] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.505 [2024-11-29 07:45:39.228967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.505 request: 00:13:49.505 { 00:13:49.505 "base_bdev": "BaseBdev1", 00:13:49.505 "raid_bdev": "raid_bdev1", 00:13:49.505 "method": "bdev_raid_add_base_bdev", 00:13:49.505 "req_id": 1 00:13:49.505 } 00:13:49.505 Got JSON-RPC error response 00:13:49.505 response: 00:13:49.505 { 00:13:49.505 "code": -22, 00:13:49.505 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:49.505 } 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.505 07:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.443 "name": "raid_bdev1", 00:13:50.443 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:50.443 "strip_size_kb": 0, 00:13:50.443 "state": "online", 00:13:50.443 "raid_level": "raid1", 00:13:50.443 "superblock": true, 00:13:50.443 "num_base_bdevs": 2, 00:13:50.443 "num_base_bdevs_discovered": 1, 00:13:50.443 "num_base_bdevs_operational": 1, 00:13:50.443 "base_bdevs_list": [ 00:13:50.443 { 00:13:50.443 "name": null, 00:13:50.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.443 "is_configured": false, 00:13:50.443 "data_offset": 0, 00:13:50.443 "data_size": 63488 00:13:50.443 }, 00:13:50.443 { 00:13:50.443 "name": "BaseBdev2", 00:13:50.443 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:50.443 "is_configured": true, 00:13:50.443 "data_offset": 2048, 00:13:50.443 "data_size": 63488 00:13:50.443 } 00:13:50.443 ] 00:13:50.443 }' 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.443 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.014 07:45:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.014 "name": "raid_bdev1", 00:13:51.014 "uuid": "fd0fe261-630a-4081-a7e5-7aba88502a04", 00:13:51.014 "strip_size_kb": 0, 00:13:51.014 "state": "online", 00:13:51.014 "raid_level": "raid1", 00:13:51.014 "superblock": true, 00:13:51.014 "num_base_bdevs": 2, 00:13:51.014 "num_base_bdevs_discovered": 1, 00:13:51.014 "num_base_bdevs_operational": 1, 00:13:51.014 "base_bdevs_list": [ 00:13:51.014 { 00:13:51.014 "name": null, 00:13:51.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.014 "is_configured": false, 00:13:51.014 "data_offset": 0, 00:13:51.014 "data_size": 63488 00:13:51.014 }, 00:13:51.014 { 00:13:51.014 "name": "BaseBdev2", 00:13:51.014 "uuid": "a0e144e9-9c9d-5bec-8ba5-c1a42475f52d", 00:13:51.014 "is_configured": true, 00:13:51.014 "data_offset": 2048, 00:13:51.014 "data_size": 63488 00:13:51.014 } 00:13:51.014 ] 00:13:51.014 }' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.014 07:45:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76571 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76571 ']' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76571 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76571 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.014 killing process with pid 76571 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76571' 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76571 00:13:51.014 Received shutdown signal, test time was about 17.207647 seconds 00:13:51.014 00:13:51.014 Latency(us) 00:13:51.014 [2024-11-29T07:45:40.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.014 [2024-11-29T07:45:40.959Z] =================================================================================================================== 00:13:51.014 [2024-11-29T07:45:40.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.014 07:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76571 00:13:51.014 [2024-11-29 07:45:40.871778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.014 [2024-11-29 07:45:40.871994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.014 [2024-11-29 07:45:40.872086] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.014 [2024-11-29 07:45:40.872161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:51.272 [2024-11-29 07:45:41.096344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.675 07:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:52.675 00:13:52.675 real 0m20.264s 00:13:52.676 user 0m26.538s 00:13:52.676 sys 0m2.147s 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.676 ************************************ 00:13:52.676 END TEST raid_rebuild_test_sb_io 00:13:52.676 ************************************ 00:13:52.676 07:45:42 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:52.676 07:45:42 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:52.676 07:45:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:52.676 07:45:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.676 07:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.676 ************************************ 00:13:52.676 START TEST raid_rebuild_test 00:13:52.676 ************************************ 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:52.676 07:45:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77266 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77266 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77266 ']' 00:13:52.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.676 07:45:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.676 [2024-11-29 07:45:42.399719] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:52.676 [2024-11-29 07:45:42.399933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.676 Zero copy mechanism will not be used. 00:13:52.676 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77266 ] 00:13:52.676 [2024-11-29 07:45:42.552303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.936 [2024-11-29 07:45:42.660612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.936 [2024-11-29 07:45:42.848372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.936 [2024-11-29 07:45:42.848496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.505 BaseBdev1_malloc 00:13:53.505 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.506 [2024-11-29 07:45:43.266981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.506 [2024-11-29 07:45:43.267055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.506 [2024-11-29 07:45:43.267076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.506 [2024-11-29 07:45:43.267087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.506 [2024-11-29 07:45:43.269144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.506 [2024-11-29 07:45:43.269181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.506 BaseBdev1 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 BaseBdev2_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 [2024-11-29 07:45:43.317369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:53.506 [2024-11-29 07:45:43.317428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:53.506 [2024-11-29 07:45:43.317450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.506 [2024-11-29 07:45:43.317461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.506 [2024-11-29 07:45:43.319493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.506 [2024-11-29 07:45:43.319531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.506 BaseBdev2 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 BaseBdev3_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 [2024-11-29 07:45:43.380820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:53.506 [2024-11-29 07:45:43.380871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.506 [2024-11-29 07:45:43.380891] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:53.506 [2024-11-29 07:45:43.380904] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.506 [2024-11-29 07:45:43.382905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.506 [2024-11-29 07:45:43.382942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.506 BaseBdev3 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 BaseBdev4_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 [2024-11-29 07:45:43.434602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:53.506 [2024-11-29 07:45:43.434659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.506 [2024-11-29 07:45:43.434678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:53.506 [2024-11-29 07:45:43.434688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.506 [2024-11-29 07:45:43.436758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.506 [2024-11-29 07:45:43.436797] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:53.506 BaseBdev4 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.506 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 spare_malloc 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 spare_delay 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 [2024-11-29 07:45:43.499373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.766 [2024-11-29 07:45:43.499421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.766 [2024-11-29 07:45:43.499438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:53.766 [2024-11-29 07:45:43.499449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.766 [2024-11-29 
07:45:43.501430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.766 [2024-11-29 07:45:43.501464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:53.766 spare 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 [2024-11-29 07:45:43.511395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.766 [2024-11-29 07:45:43.513120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.766 [2024-11-29 07:45:43.513212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.766 [2024-11-29 07:45:43.513260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.766 [2024-11-29 07:45:43.513335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:53.766 [2024-11-29 07:45:43.513347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:53.766 [2024-11-29 07:45:43.513596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:53.766 [2024-11-29 07:45:43.513781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:53.766 [2024-11-29 07:45:43.513798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:53.766 [2024-11-29 07:45:43.513954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.766 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.766 "name": "raid_bdev1", 00:13:53.766 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:13:53.766 "strip_size_kb": 0, 00:13:53.766 "state": "online", 00:13:53.766 "raid_level": 
"raid1", 00:13:53.766 "superblock": false, 00:13:53.766 "num_base_bdevs": 4, 00:13:53.766 "num_base_bdevs_discovered": 4, 00:13:53.766 "num_base_bdevs_operational": 4, 00:13:53.766 "base_bdevs_list": [ 00:13:53.766 { 00:13:53.766 "name": "BaseBdev1", 00:13:53.766 "uuid": "1a0fadfe-6b1e-5b2f-800d-2d009c7aa5c4", 00:13:53.766 "is_configured": true, 00:13:53.766 "data_offset": 0, 00:13:53.766 "data_size": 65536 00:13:53.766 }, 00:13:53.766 { 00:13:53.766 "name": "BaseBdev2", 00:13:53.766 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:13:53.766 "is_configured": true, 00:13:53.766 "data_offset": 0, 00:13:53.766 "data_size": 65536 00:13:53.766 }, 00:13:53.766 { 00:13:53.766 "name": "BaseBdev3", 00:13:53.766 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:13:53.766 "is_configured": true, 00:13:53.766 "data_offset": 0, 00:13:53.766 "data_size": 65536 00:13:53.767 }, 00:13:53.767 { 00:13:53.767 "name": "BaseBdev4", 00:13:53.767 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:13:53.767 "is_configured": true, 00:13:53.767 "data_offset": 0, 00:13:53.767 "data_size": 65536 00:13:53.767 } 00:13:53.767 ] 00:13:53.767 }' 00:13:53.767 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.767 07:45:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:54.337 07:45:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 [2024-11-29 07:45:44.006902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.337 07:45:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:54.337 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.338 07:45:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:54.338 [2024-11-29 07:45:44.258196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:54.338 /dev/nbd0 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.598 1+0 records in 00:13:54.598 1+0 records out 00:13:54.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027313 s, 15.0 MB/s 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:54.598 07:45:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:59.875 65536+0 records in 00:13:59.875 65536+0 records out 00:13:59.875 33554432 bytes (34 MB, 32 MiB) copied, 5.22531 s, 6.4 MB/s 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:59.875 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:59.876 [2024-11-29 07:45:49.745028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:59.876 
07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.876 [2024-11-29 07:45:49.777080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.876 07:45:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.876 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.135 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.135 "name": "raid_bdev1", 00:14:00.135 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:00.135 "strip_size_kb": 0, 00:14:00.135 "state": "online", 00:14:00.135 "raid_level": "raid1", 00:14:00.135 "superblock": false, 00:14:00.135 "num_base_bdevs": 4, 00:14:00.135 "num_base_bdevs_discovered": 3, 00:14:00.135 "num_base_bdevs_operational": 3, 00:14:00.135 "base_bdevs_list": [ 00:14:00.135 { 00:14:00.135 "name": null, 00:14:00.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.135 "is_configured": false, 00:14:00.135 "data_offset": 0, 00:14:00.135 "data_size": 65536 00:14:00.135 }, 00:14:00.135 { 00:14:00.135 "name": "BaseBdev2", 00:14:00.135 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:14:00.135 "is_configured": true, 00:14:00.135 "data_offset": 0, 00:14:00.135 "data_size": 65536 00:14:00.135 }, 00:14:00.135 { 00:14:00.135 "name": "BaseBdev3", 00:14:00.135 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:00.135 "is_configured": true, 00:14:00.135 "data_offset": 0, 00:14:00.135 "data_size": 65536 00:14:00.135 }, 00:14:00.135 { 00:14:00.135 "name": "BaseBdev4", 00:14:00.135 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:00.135 
"is_configured": true, 00:14:00.135 "data_offset": 0, 00:14:00.135 "data_size": 65536 00:14:00.135 } 00:14:00.135 ] 00:14:00.135 }' 00:14:00.135 07:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.135 07:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.395 07:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.395 07:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.395 07:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.395 [2024-11-29 07:45:50.216308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.395 [2024-11-29 07:45:50.231582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:00.395 07:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.395 07:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.395 [2024-11-29 07:45:50.233474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.351 
07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.351 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.619 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.619 "name": "raid_bdev1", 00:14:01.619 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:01.619 "strip_size_kb": 0, 00:14:01.619 "state": "online", 00:14:01.619 "raid_level": "raid1", 00:14:01.619 "superblock": false, 00:14:01.619 "num_base_bdevs": 4, 00:14:01.619 "num_base_bdevs_discovered": 4, 00:14:01.619 "num_base_bdevs_operational": 4, 00:14:01.619 "process": { 00:14:01.619 "type": "rebuild", 00:14:01.619 "target": "spare", 00:14:01.619 "progress": { 00:14:01.619 "blocks": 20480, 00:14:01.619 "percent": 31 00:14:01.619 } 00:14:01.619 }, 00:14:01.619 "base_bdevs_list": [ 00:14:01.619 { 00:14:01.619 "name": "spare", 00:14:01.619 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:01.619 "is_configured": true, 00:14:01.619 "data_offset": 0, 00:14:01.619 "data_size": 65536 00:14:01.619 }, 00:14:01.619 { 00:14:01.619 "name": "BaseBdev2", 00:14:01.619 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:14:01.619 "is_configured": true, 00:14:01.619 "data_offset": 0, 00:14:01.619 "data_size": 65536 00:14:01.619 }, 00:14:01.619 { 00:14:01.620 "name": "BaseBdev3", 00:14:01.620 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:01.620 "is_configured": true, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 }, 00:14:01.620 { 00:14:01.620 "name": "BaseBdev4", 00:14:01.620 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:01.620 "is_configured": true, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 } 00:14:01.620 ] 00:14:01.620 }' 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 [2024-11-29 07:45:51.360975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.620 [2024-11-29 07:45:51.438451] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.620 [2024-11-29 07:45:51.438532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.620 [2024-11-29 07:45:51.438548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.620 [2024-11-29 07:45:51.438557] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.620 07:45:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.620 "name": "raid_bdev1", 00:14:01.620 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:01.620 "strip_size_kb": 0, 00:14:01.620 "state": "online", 00:14:01.620 "raid_level": "raid1", 00:14:01.620 "superblock": false, 00:14:01.620 "num_base_bdevs": 4, 00:14:01.620 "num_base_bdevs_discovered": 3, 00:14:01.620 "num_base_bdevs_operational": 3, 00:14:01.620 "base_bdevs_list": [ 00:14:01.620 { 00:14:01.620 "name": null, 00:14:01.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.620 "is_configured": false, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 }, 00:14:01.620 { 00:14:01.620 "name": "BaseBdev2", 00:14:01.620 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:14:01.620 "is_configured": true, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 }, 00:14:01.620 { 00:14:01.620 "name": 
"BaseBdev3", 00:14:01.620 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:01.620 "is_configured": true, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 }, 00:14:01.620 { 00:14:01.620 "name": "BaseBdev4", 00:14:01.620 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:01.620 "is_configured": true, 00:14:01.620 "data_offset": 0, 00:14:01.620 "data_size": 65536 00:14:01.620 } 00:14:01.620 ] 00:14:01.620 }' 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.620 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.189 "name": "raid_bdev1", 00:14:02.189 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:02.189 "strip_size_kb": 0, 00:14:02.189 "state": "online", 00:14:02.189 "raid_level": 
"raid1", 00:14:02.189 "superblock": false, 00:14:02.189 "num_base_bdevs": 4, 00:14:02.189 "num_base_bdevs_discovered": 3, 00:14:02.189 "num_base_bdevs_operational": 3, 00:14:02.189 "base_bdevs_list": [ 00:14:02.189 { 00:14:02.189 "name": null, 00:14:02.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.189 "is_configured": false, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 65536 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": "BaseBdev2", 00:14:02.189 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:14:02.189 "is_configured": true, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 65536 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": "BaseBdev3", 00:14:02.189 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:02.189 "is_configured": true, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 65536 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": "BaseBdev4", 00:14:02.189 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:02.189 "is_configured": true, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 65536 00:14:02.189 } 00:14:02.189 ] 00:14:02.189 }' 00:14:02.189 07:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 [2024-11-29 07:45:52.075034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:02.189 [2024-11-29 07:45:52.089504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:02.189 [2024-11-29 07:45:52.091404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.189 07:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:03.570 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.570 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.570 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.571 "name": "raid_bdev1", 00:14:03.571 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:03.571 "strip_size_kb": 0, 00:14:03.571 "state": "online", 00:14:03.571 "raid_level": "raid1", 00:14:03.571 "superblock": false, 00:14:03.571 "num_base_bdevs": 4, 00:14:03.571 "num_base_bdevs_discovered": 4, 00:14:03.571 "num_base_bdevs_operational": 4, 
00:14:03.571 "process": { 00:14:03.571 "type": "rebuild", 00:14:03.571 "target": "spare", 00:14:03.571 "progress": { 00:14:03.571 "blocks": 20480, 00:14:03.571 "percent": 31 00:14:03.571 } 00:14:03.571 }, 00:14:03.571 "base_bdevs_list": [ 00:14:03.571 { 00:14:03.571 "name": "spare", 00:14:03.571 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": "BaseBdev2", 00:14:03.571 "uuid": "6518fdcb-d841-59c9-8c94-042f85fa5e5f", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": "BaseBdev3", 00:14:03.571 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": "BaseBdev4", 00:14:03.571 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 } 00:14:03.571 ] 00:14:03.571 }' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.571 [2024-11-29 07:45:53.254860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.571 [2024-11-29 07:45:53.296510] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.571 "name": "raid_bdev1", 00:14:03.571 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:03.571 "strip_size_kb": 0, 00:14:03.571 "state": "online", 00:14:03.571 "raid_level": "raid1", 00:14:03.571 "superblock": false, 00:14:03.571 "num_base_bdevs": 4, 00:14:03.571 "num_base_bdevs_discovered": 3, 00:14:03.571 "num_base_bdevs_operational": 3, 00:14:03.571 "process": { 00:14:03.571 "type": "rebuild", 00:14:03.571 "target": "spare", 00:14:03.571 "progress": { 00:14:03.571 "blocks": 24576, 00:14:03.571 "percent": 37 00:14:03.571 } 00:14:03.571 }, 00:14:03.571 "base_bdevs_list": [ 00:14:03.571 { 00:14:03.571 "name": "spare", 00:14:03.571 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": null, 00:14:03.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.571 "is_configured": false, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": "BaseBdev3", 00:14:03.571 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 }, 00:14:03.571 { 00:14:03.571 "name": "BaseBdev4", 00:14:03.571 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:03.571 "is_configured": true, 00:14:03.571 "data_offset": 0, 00:14:03.571 "data_size": 65536 00:14:03.571 } 00:14:03.571 ] 00:14:03.571 }' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.571 07:45:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=432 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.571 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.572 "name": "raid_bdev1", 00:14:03.572 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:03.572 "strip_size_kb": 0, 00:14:03.572 "state": "online", 00:14:03.572 "raid_level": "raid1", 00:14:03.572 "superblock": false, 00:14:03.572 "num_base_bdevs": 4, 00:14:03.572 "num_base_bdevs_discovered": 3, 00:14:03.572 "num_base_bdevs_operational": 3, 00:14:03.572 "process": { 00:14:03.572 "type": "rebuild", 00:14:03.572 "target": "spare", 00:14:03.572 "progress": { 00:14:03.572 "blocks": 26624, 00:14:03.572 "percent": 40 
00:14:03.572 } 00:14:03.572 }, 00:14:03.572 "base_bdevs_list": [ 00:14:03.572 { 00:14:03.572 "name": "spare", 00:14:03.572 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:03.572 "is_configured": true, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": null, 00:14:03.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.572 "is_configured": false, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": "BaseBdev3", 00:14:03.572 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:03.572 "is_configured": true, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": "BaseBdev4", 00:14:03.572 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:03.572 "is_configured": true, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 } 00:14:03.572 ] 00:14:03.572 }' 00:14:03.572 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.830 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.830 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.830 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.830 07:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.767 07:45:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.767 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.768 "name": "raid_bdev1", 00:14:04.768 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:04.768 "strip_size_kb": 0, 00:14:04.768 "state": "online", 00:14:04.768 "raid_level": "raid1", 00:14:04.768 "superblock": false, 00:14:04.768 "num_base_bdevs": 4, 00:14:04.768 "num_base_bdevs_discovered": 3, 00:14:04.768 "num_base_bdevs_operational": 3, 00:14:04.768 "process": { 00:14:04.768 "type": "rebuild", 00:14:04.768 "target": "spare", 00:14:04.768 "progress": { 00:14:04.768 "blocks": 49152, 00:14:04.768 "percent": 75 00:14:04.768 } 00:14:04.768 }, 00:14:04.768 "base_bdevs_list": [ 00:14:04.768 { 00:14:04.768 "name": "spare", 00:14:04.768 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:04.768 "is_configured": true, 00:14:04.768 "data_offset": 0, 00:14:04.768 "data_size": 65536 00:14:04.768 }, 00:14:04.768 { 00:14:04.768 "name": null, 00:14:04.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.768 "is_configured": false, 00:14:04.768 "data_offset": 0, 00:14:04.768 "data_size": 65536 00:14:04.768 }, 00:14:04.768 { 00:14:04.768 "name": "BaseBdev3", 00:14:04.768 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:04.768 "is_configured": true, 
00:14:04.768 "data_offset": 0, 00:14:04.768 "data_size": 65536 00:14:04.768 }, 00:14:04.768 { 00:14:04.768 "name": "BaseBdev4", 00:14:04.768 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:04.768 "is_configured": true, 00:14:04.768 "data_offset": 0, 00:14:04.768 "data_size": 65536 00:14:04.768 } 00:14:04.768 ] 00:14:04.768 }' 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.768 07:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.714 [2024-11-29 07:45:55.304382] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.714 [2024-11-29 07:45:55.304463] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.714 [2024-11-29 07:45:55.304517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.973 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.973 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.974 "name": "raid_bdev1", 00:14:05.974 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:05.974 "strip_size_kb": 0, 00:14:05.974 "state": "online", 00:14:05.974 "raid_level": "raid1", 00:14:05.974 "superblock": false, 00:14:05.974 "num_base_bdevs": 4, 00:14:05.974 "num_base_bdevs_discovered": 3, 00:14:05.974 "num_base_bdevs_operational": 3, 00:14:05.974 "base_bdevs_list": [ 00:14:05.974 { 00:14:05.974 "name": "spare", 00:14:05.974 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:05.974 "is_configured": true, 00:14:05.974 "data_offset": 0, 00:14:05.974 "data_size": 65536 00:14:05.974 }, 00:14:05.974 { 00:14:05.974 "name": null, 00:14:05.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.974 "is_configured": false, 00:14:05.974 "data_offset": 0, 00:14:05.974 "data_size": 65536 00:14:05.974 }, 00:14:05.974 { 00:14:05.974 "name": "BaseBdev3", 00:14:05.974 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:05.974 "is_configured": true, 00:14:05.974 "data_offset": 0, 00:14:05.974 "data_size": 65536 00:14:05.974 }, 00:14:05.974 { 00:14:05.974 "name": "BaseBdev4", 00:14:05.974 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:05.974 "is_configured": true, 00:14:05.974 "data_offset": 0, 00:14:05.974 "data_size": 65536 00:14:05.974 } 00:14:05.974 ] 00:14:05.974 }' 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.974 07:45:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 07:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.233 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.233 "name": "raid_bdev1", 00:14:06.233 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:06.233 "strip_size_kb": 0, 00:14:06.233 "state": "online", 00:14:06.233 "raid_level": "raid1", 00:14:06.233 "superblock": false, 00:14:06.233 "num_base_bdevs": 4, 00:14:06.233 "num_base_bdevs_discovered": 3, 00:14:06.233 "num_base_bdevs_operational": 3, 00:14:06.233 "base_bdevs_list": [ 00:14:06.233 { 00:14:06.233 "name": "spare", 
00:14:06.233 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:06.233 "is_configured": true, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": null, 00:14:06.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.233 "is_configured": false, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": "BaseBdev3", 00:14:06.233 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:06.233 "is_configured": true, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": "BaseBdev4", 00:14:06.233 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:06.233 "is_configured": true, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 } 00:14:06.233 ] 00:14:06.233 }' 00:14:06.233 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.233 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.233 07:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.233 07:45:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.233 "name": "raid_bdev1", 00:14:06.233 "uuid": "6490962d-1ff3-46b9-a8b3-81a67dcdd69b", 00:14:06.233 "strip_size_kb": 0, 00:14:06.233 "state": "online", 00:14:06.233 "raid_level": "raid1", 00:14:06.233 "superblock": false, 00:14:06.233 "num_base_bdevs": 4, 00:14:06.233 "num_base_bdevs_discovered": 3, 00:14:06.233 "num_base_bdevs_operational": 3, 00:14:06.233 "base_bdevs_list": [ 00:14:06.233 { 00:14:06.233 "name": "spare", 00:14:06.233 "uuid": "10ee2a65-4c9e-528e-aa9d-d2c068777503", 00:14:06.233 "is_configured": true, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": null, 00:14:06.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.233 "is_configured": false, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": "BaseBdev3", 00:14:06.233 "uuid": "4a9fb33c-5405-5015-99a2-d0244e811707", 00:14:06.233 "is_configured": true, 
00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 }, 00:14:06.233 { 00:14:06.233 "name": "BaseBdev4", 00:14:06.233 "uuid": "af91c031-4c58-57ee-b0d4-a7b0db6d55f1", 00:14:06.233 "is_configured": true, 00:14:06.233 "data_offset": 0, 00:14:06.233 "data_size": 65536 00:14:06.233 } 00:14:06.233 ] 00:14:06.233 }' 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.233 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.801 [2024-11-29 07:45:56.455072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.801 [2024-11-29 07:45:56.455176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.801 [2024-11-29 07:45:56.455277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.801 [2024-11-29 07:45:56.455374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.801 [2024-11-29 07:45:56.455420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:06.801 /dev/nbd0 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:06.801 07:45:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.801 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.802 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.802 1+0 records in 00:14:06.802 1+0 records out 00:14:06.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382092 s, 10.7 MB/s 00:14:06.802 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:07.061 /dev/nbd1 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.061 
07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:07.061 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.062 1+0 records in 00:14:07.062 1+0 records out 00:14:07.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419783 s, 9.8 MB/s 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:14:07.062 07:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.321 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.581 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.841 
07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77266 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77266 ']' 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77266 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77266 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.841 killing process with pid 77266 00:14:07.841 Received shutdown signal, test time was about 60.000000 seconds 00:14:07.841 00:14:07.841 Latency(us) 00:14:07.841 [2024-11-29T07:45:57.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.841 
[2024-11-29T07:45:57.786Z] =================================================================================================================== 00:14:07.841 [2024-11-29T07:45:57.786Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77266' 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77266 00:14:07.841 [2024-11-29 07:45:57.632875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.841 07:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77266 00:14:08.411 [2024-11-29 07:45:58.100806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:09.350 00:14:09.350 real 0m16.863s 00:14:09.350 user 0m19.063s 00:14:09.350 sys 0m2.949s 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.350 ************************************ 00:14:09.350 END TEST raid_rebuild_test 00:14:09.350 ************************************ 00:14:09.350 07:45:59 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:09.350 07:45:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:09.350 07:45:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.350 07:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.350 ************************************ 00:14:09.350 START TEST raid_rebuild_test_sb 00:14:09.350 ************************************ 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:09.350 07:45:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77701 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77701 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77701 ']' 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:09.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.350 07:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.609 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:09.609 Zero copy mechanism will not be used. 00:14:09.609 [2024-11-29 07:45:59.330436] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:09.609 [2024-11-29 07:45:59.330618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77701 ] 00:14:09.609 [2024-11-29 07:45:59.503553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.869 [2024-11-29 07:45:59.605516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.869 [2024-11-29 07:45:59.793198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.869 [2024-11-29 07:45:59.793291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 
BaseBdev1_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 [2024-11-29 07:46:00.189872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:10.439 [2024-11-29 07:46:00.189932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.439 [2024-11-29 07:46:00.189955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.439 [2024-11-29 07:46:00.189966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.439 [2024-11-29 07:46:00.192080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.439 [2024-11-29 07:46:00.192184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.439 BaseBdev1 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 BaseBdev2_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 [2024-11-29 07:46:00.239792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:10.439 [2024-11-29 07:46:00.239929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.439 [2024-11-29 07:46:00.239954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.439 [2024-11-29 07:46:00.239965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.439 [2024-11-29 07:46:00.241954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.439 [2024-11-29 07:46:00.241995] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.439 BaseBdev2 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 BaseBdev3_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 [2024-11-29 07:46:00.321884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:10.439 [2024-11-29 07:46:00.321934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.439 [2024-11-29 07:46:00.321973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.439 [2024-11-29 07:46:00.321983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.439 [2024-11-29 07:46:00.323951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.439 [2024-11-29 07:46:00.324056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.439 BaseBdev3 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 BaseBdev4_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.439 [2024-11-29 07:46:00.373987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:14:10.439 [2024-11-29 07:46:00.374057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.439 [2024-11-29 07:46:00.374077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:10.439 [2024-11-29 07:46:00.374087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.439 [2024-11-29 07:46:00.376045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.439 [2024-11-29 07:46:00.376086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.439 BaseBdev4 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.439 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 spare_malloc 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 spare_delay 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.699 07:46:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 [2024-11-29 07:46:00.440327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.699 [2024-11-29 07:46:00.440374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.699 [2024-11-29 07:46:00.440390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:10.699 [2024-11-29 07:46:00.440399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.699 [2024-11-29 07:46:00.442373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.699 [2024-11-29 07:46:00.442412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.699 spare 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 [2024-11-29 07:46:00.452351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.699 [2024-11-29 07:46:00.454075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.699 [2024-11-29 07:46:00.454154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.699 [2024-11-29 07:46:00.454208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.699 [2024-11-29 07:46:00.454385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:10.699 [2024-11-29 07:46:00.454401] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.699 [2024-11-29 07:46:00.454645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:10.699 [2024-11-29 07:46:00.454817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:10.699 [2024-11-29 07:46:00.454828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:10.699 [2024-11-29 07:46:00.454965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.699 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.699 "name": "raid_bdev1", 00:14:10.699 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:10.699 "strip_size_kb": 0, 00:14:10.699 "state": "online", 00:14:10.699 "raid_level": "raid1", 00:14:10.699 "superblock": true, 00:14:10.699 "num_base_bdevs": 4, 00:14:10.699 "num_base_bdevs_discovered": 4, 00:14:10.699 "num_base_bdevs_operational": 4, 00:14:10.699 "base_bdevs_list": [ 00:14:10.699 { 00:14:10.699 "name": "BaseBdev1", 00:14:10.699 "uuid": "50b63dc0-a852-5dce-8373-c1fcfd7ec3a1", 00:14:10.699 "is_configured": true, 00:14:10.699 "data_offset": 2048, 00:14:10.699 "data_size": 63488 00:14:10.699 }, 00:14:10.699 { 00:14:10.699 "name": "BaseBdev2", 00:14:10.699 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:10.699 "is_configured": true, 00:14:10.699 "data_offset": 2048, 00:14:10.699 "data_size": 63488 00:14:10.699 }, 00:14:10.699 { 00:14:10.699 "name": "BaseBdev3", 00:14:10.700 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:10.700 "is_configured": true, 00:14:10.700 "data_offset": 2048, 00:14:10.700 "data_size": 63488 00:14:10.700 }, 00:14:10.700 { 00:14:10.700 "name": "BaseBdev4", 00:14:10.700 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:10.700 "is_configured": true, 00:14:10.700 "data_offset": 2048, 00:14:10.700 "data_size": 63488 00:14:10.700 } 00:14:10.700 ] 00:14:10.700 }' 00:14:10.700 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.700 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:11.269 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.269 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:11.269 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.269 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.269 [2024-11-29 07:46:00.931915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.269 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.270 07:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.270 
07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.270 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:11.270 [2024-11-29 07:46:01.183212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:11.270 /dev/nbd0 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.530 07:46:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.530 1+0 records in 00:14:11.530 1+0 records out 00:14:11.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351218 s, 11.7 MB/s 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:11.530 07:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:16.808 63488+0 records in 00:14:16.808 63488+0 records out 00:14:16.809 32505856 bytes (33 MB, 31 MiB) copied, 4.87219 s, 6.7 MB/s 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.809 [2024-11-29 07:46:06.320690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.809 [2024-11-29 07:46:06.356697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.809 07:46:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.809 "name": "raid_bdev1", 00:14:16.809 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:16.809 "strip_size_kb": 0, 00:14:16.809 "state": "online", 00:14:16.809 "raid_level": "raid1", 00:14:16.809 "superblock": true, 00:14:16.809 "num_base_bdevs": 4, 
00:14:16.809 "num_base_bdevs_discovered": 3, 00:14:16.809 "num_base_bdevs_operational": 3, 00:14:16.809 "base_bdevs_list": [ 00:14:16.809 { 00:14:16.809 "name": null, 00:14:16.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.809 "is_configured": false, 00:14:16.809 "data_offset": 0, 00:14:16.809 "data_size": 63488 00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": "BaseBdev2", 00:14:16.809 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:16.809 "is_configured": true, 00:14:16.809 "data_offset": 2048, 00:14:16.809 "data_size": 63488 00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": "BaseBdev3", 00:14:16.809 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:16.809 "is_configured": true, 00:14:16.809 "data_offset": 2048, 00:14:16.809 "data_size": 63488 00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": "BaseBdev4", 00:14:16.809 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:16.809 "is_configured": true, 00:14:16.809 "data_offset": 2048, 00:14:16.809 "data_size": 63488 00:14:16.809 } 00:14:16.809 ] 00:14:16.809 }' 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.809 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.070 07:46:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.070 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.070 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.070 [2024-11-29 07:46:06.815950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.070 [2024-11-29 07:46:06.830505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:17.070 07:46:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.070 07:46:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.070 [2024-11-29 07:46:06.832309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.022 "name": "raid_bdev1", 00:14:18.022 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:18.022 "strip_size_kb": 0, 00:14:18.022 "state": "online", 00:14:18.022 "raid_level": "raid1", 00:14:18.022 "superblock": true, 00:14:18.022 "num_base_bdevs": 4, 00:14:18.022 "num_base_bdevs_discovered": 4, 00:14:18.022 "num_base_bdevs_operational": 4, 00:14:18.022 "process": { 00:14:18.022 "type": "rebuild", 00:14:18.022 "target": "spare", 00:14:18.022 "progress": { 00:14:18.022 "blocks": 20480, 00:14:18.022 "percent": 32 00:14:18.022 } 00:14:18.022 }, 00:14:18.022 "base_bdevs_list": [ 00:14:18.022 { 
00:14:18.022 "name": "spare", 00:14:18.022 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:18.022 "is_configured": true, 00:14:18.022 "data_offset": 2048, 00:14:18.022 "data_size": 63488 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev2", 00:14:18.022 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:18.022 "is_configured": true, 00:14:18.022 "data_offset": 2048, 00:14:18.022 "data_size": 63488 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev3", 00:14:18.022 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:18.022 "is_configured": true, 00:14:18.022 "data_offset": 2048, 00:14:18.022 "data_size": 63488 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev4", 00:14:18.022 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:18.022 "is_configured": true, 00:14:18.022 "data_offset": 2048, 00:14:18.022 "data_size": 63488 00:14:18.022 } 00:14:18.022 ] 00:14:18.022 }' 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.022 07:46:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.282 [2024-11-29 07:46:07.967950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.282 [2024-11-29 07:46:08.037116] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.282 [2024-11-29 
07:46:08.037176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.282 [2024-11-29 07:46:08.037193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.282 [2024-11-29 07:46:08.037202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.282 "name": "raid_bdev1", 00:14:18.282 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:18.282 "strip_size_kb": 0, 00:14:18.282 "state": "online", 00:14:18.282 "raid_level": "raid1", 00:14:18.282 "superblock": true, 00:14:18.282 "num_base_bdevs": 4, 00:14:18.282 "num_base_bdevs_discovered": 3, 00:14:18.282 "num_base_bdevs_operational": 3, 00:14:18.282 "base_bdevs_list": [ 00:14:18.282 { 00:14:18.282 "name": null, 00:14:18.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.282 "is_configured": false, 00:14:18.282 "data_offset": 0, 00:14:18.282 "data_size": 63488 00:14:18.282 }, 00:14:18.282 { 00:14:18.282 "name": "BaseBdev2", 00:14:18.282 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:18.282 "is_configured": true, 00:14:18.282 "data_offset": 2048, 00:14:18.282 "data_size": 63488 00:14:18.282 }, 00:14:18.282 { 00:14:18.282 "name": "BaseBdev3", 00:14:18.282 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:18.282 "is_configured": true, 00:14:18.282 "data_offset": 2048, 00:14:18.282 "data_size": 63488 00:14:18.282 }, 00:14:18.282 { 00:14:18.282 "name": "BaseBdev4", 00:14:18.282 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:18.282 "is_configured": true, 00:14:18.282 "data_offset": 2048, 00:14:18.282 "data_size": 63488 00:14:18.282 } 00:14:18.282 ] 00:14:18.282 }' 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.282 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.852 07:46:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.852 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.852 "name": "raid_bdev1", 00:14:18.852 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:18.852 "strip_size_kb": 0, 00:14:18.852 "state": "online", 00:14:18.852 "raid_level": "raid1", 00:14:18.852 "superblock": true, 00:14:18.852 "num_base_bdevs": 4, 00:14:18.852 "num_base_bdevs_discovered": 3, 00:14:18.852 "num_base_bdevs_operational": 3, 00:14:18.852 "base_bdevs_list": [ 00:14:18.852 { 00:14:18.852 "name": null, 00:14:18.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.852 "is_configured": false, 00:14:18.852 "data_offset": 0, 00:14:18.852 "data_size": 63488 00:14:18.852 }, 00:14:18.852 { 00:14:18.852 "name": "BaseBdev2", 00:14:18.852 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:18.853 "is_configured": true, 00:14:18.853 "data_offset": 2048, 00:14:18.853 "data_size": 63488 00:14:18.853 }, 00:14:18.853 { 00:14:18.853 "name": "BaseBdev3", 00:14:18.853 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:18.853 "is_configured": true, 00:14:18.853 "data_offset": 2048, 00:14:18.853 "data_size": 63488 
00:14:18.853 }, 00:14:18.853 { 00:14:18.853 "name": "BaseBdev4", 00:14:18.853 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:18.853 "is_configured": true, 00:14:18.853 "data_offset": 2048, 00:14:18.853 "data_size": 63488 00:14:18.853 } 00:14:18.853 ] 00:14:18.853 }' 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.853 [2024-11-29 07:46:08.665430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.853 [2024-11-29 07:46:08.679347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.853 07:46:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:18.853 [2024-11-29 07:46:08.681167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.793 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.053 "name": "raid_bdev1", 00:14:20.053 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:20.053 "strip_size_kb": 0, 00:14:20.053 "state": "online", 00:14:20.053 "raid_level": "raid1", 00:14:20.053 "superblock": true, 00:14:20.053 "num_base_bdevs": 4, 00:14:20.053 "num_base_bdevs_discovered": 4, 00:14:20.053 "num_base_bdevs_operational": 4, 00:14:20.053 "process": { 00:14:20.053 "type": "rebuild", 00:14:20.053 "target": "spare", 00:14:20.053 "progress": { 00:14:20.053 "blocks": 20480, 00:14:20.053 "percent": 32 00:14:20.053 } 00:14:20.053 }, 00:14:20.053 "base_bdevs_list": [ 00:14:20.053 { 00:14:20.053 "name": "spare", 00:14:20.053 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:20.053 "is_configured": true, 00:14:20.053 "data_offset": 2048, 00:14:20.053 "data_size": 63488 00:14:20.053 }, 00:14:20.053 { 00:14:20.053 "name": "BaseBdev2", 00:14:20.053 "uuid": "8d4fa93f-121d-54df-a8de-789e9fae43ac", 00:14:20.053 "is_configured": true, 00:14:20.053 "data_offset": 2048, 00:14:20.053 "data_size": 63488 00:14:20.053 }, 00:14:20.053 { 00:14:20.053 "name": "BaseBdev3", 00:14:20.053 "uuid": 
"716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:20.053 "is_configured": true, 00:14:20.053 "data_offset": 2048, 00:14:20.053 "data_size": 63488 00:14:20.053 }, 00:14:20.053 { 00:14:20.053 "name": "BaseBdev4", 00:14:20.053 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:20.053 "is_configured": true, 00:14:20.053 "data_offset": 2048, 00:14:20.053 "data_size": 63488 00:14:20.053 } 00:14:20.053 ] 00:14:20.053 }' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:20.053 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.053 [2024-11-29 07:46:09.821543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.053 [2024-11-29 07:46:09.985800] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.053 07:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.313 "name": "raid_bdev1", 00:14:20.313 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:20.313 "strip_size_kb": 0, 00:14:20.313 "state": "online", 00:14:20.313 "raid_level": "raid1", 00:14:20.313 "superblock": true, 00:14:20.313 "num_base_bdevs": 4, 00:14:20.313 "num_base_bdevs_discovered": 3, 00:14:20.313 "num_base_bdevs_operational": 3, 00:14:20.313 
"process": { 00:14:20.313 "type": "rebuild", 00:14:20.313 "target": "spare", 00:14:20.313 "progress": { 00:14:20.313 "blocks": 24576, 00:14:20.313 "percent": 38 00:14:20.313 } 00:14:20.313 }, 00:14:20.313 "base_bdevs_list": [ 00:14:20.313 { 00:14:20.313 "name": "spare", 00:14:20.313 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:20.313 "is_configured": true, 00:14:20.313 "data_offset": 2048, 00:14:20.313 "data_size": 63488 00:14:20.313 }, 00:14:20.313 { 00:14:20.313 "name": null, 00:14:20.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.313 "is_configured": false, 00:14:20.313 "data_offset": 0, 00:14:20.313 "data_size": 63488 00:14:20.313 }, 00:14:20.313 { 00:14:20.313 "name": "BaseBdev3", 00:14:20.313 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:20.313 "is_configured": true, 00:14:20.313 "data_offset": 2048, 00:14:20.313 "data_size": 63488 00:14:20.313 }, 00:14:20.313 { 00:14:20.313 "name": "BaseBdev4", 00:14:20.313 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:20.313 "is_configured": true, 00:14:20.313 "data_offset": 2048, 00:14:20.313 "data_size": 63488 00:14:20.313 } 00:14:20.313 ] 00:14:20.313 }' 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.313 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.314 07:46:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.314 "name": "raid_bdev1", 00:14:20.314 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:20.314 "strip_size_kb": 0, 00:14:20.314 "state": "online", 00:14:20.314 "raid_level": "raid1", 00:14:20.314 "superblock": true, 00:14:20.314 "num_base_bdevs": 4, 00:14:20.314 "num_base_bdevs_discovered": 3, 00:14:20.314 "num_base_bdevs_operational": 3, 00:14:20.314 "process": { 00:14:20.314 "type": "rebuild", 00:14:20.314 "target": "spare", 00:14:20.314 "progress": { 00:14:20.314 "blocks": 26624, 00:14:20.314 "percent": 41 00:14:20.314 } 00:14:20.314 }, 00:14:20.314 "base_bdevs_list": [ 00:14:20.314 { 00:14:20.314 "name": "spare", 00:14:20.314 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:20.314 "is_configured": true, 00:14:20.314 "data_offset": 2048, 00:14:20.314 "data_size": 63488 00:14:20.314 }, 00:14:20.314 { 00:14:20.314 "name": null, 00:14:20.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.314 
"is_configured": false, 00:14:20.314 "data_offset": 0, 00:14:20.314 "data_size": 63488 00:14:20.314 }, 00:14:20.314 { 00:14:20.314 "name": "BaseBdev3", 00:14:20.314 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:20.314 "is_configured": true, 00:14:20.314 "data_offset": 2048, 00:14:20.314 "data_size": 63488 00:14:20.314 }, 00:14:20.314 { 00:14:20.314 "name": "BaseBdev4", 00:14:20.314 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:20.314 "is_configured": true, 00:14:20.314 "data_offset": 2048, 00:14:20.314 "data_size": 63488 00:14:20.314 } 00:14:20.314 ] 00:14:20.314 }' 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.314 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.573 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.574 07:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.515 "name": "raid_bdev1", 00:14:21.515 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:21.515 "strip_size_kb": 0, 00:14:21.515 "state": "online", 00:14:21.515 "raid_level": "raid1", 00:14:21.515 "superblock": true, 00:14:21.515 "num_base_bdevs": 4, 00:14:21.515 "num_base_bdevs_discovered": 3, 00:14:21.515 "num_base_bdevs_operational": 3, 00:14:21.515 "process": { 00:14:21.515 "type": "rebuild", 00:14:21.515 "target": "spare", 00:14:21.515 "progress": { 00:14:21.515 "blocks": 49152, 00:14:21.515 "percent": 77 00:14:21.515 } 00:14:21.515 }, 00:14:21.515 "base_bdevs_list": [ 00:14:21.515 { 00:14:21.515 "name": "spare", 00:14:21.515 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:21.515 "is_configured": true, 00:14:21.515 "data_offset": 2048, 00:14:21.515 "data_size": 63488 00:14:21.515 }, 00:14:21.515 { 00:14:21.515 "name": null, 00:14:21.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.515 "is_configured": false, 00:14:21.515 "data_offset": 0, 00:14:21.515 "data_size": 63488 00:14:21.515 }, 00:14:21.515 { 00:14:21.515 "name": "BaseBdev3", 00:14:21.515 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:21.515 "is_configured": true, 00:14:21.515 "data_offset": 2048, 00:14:21.515 "data_size": 63488 00:14:21.515 }, 00:14:21.515 { 00:14:21.515 "name": "BaseBdev4", 00:14:21.515 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:21.515 "is_configured": true, 00:14:21.515 "data_offset": 2048, 00:14:21.515 "data_size": 63488 00:14:21.515 } 00:14:21.515 ] 00:14:21.515 }' 00:14:21.515 
07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.515 07:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.085 [2024-11-29 07:46:11.893379] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:22.085 [2024-11-29 07:46:11.893442] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:22.085 [2024-11-29 07:46:11.893542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.655 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.655 "name": "raid_bdev1", 00:14:22.655 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:22.655 "strip_size_kb": 0, 00:14:22.655 "state": "online", 00:14:22.655 "raid_level": "raid1", 00:14:22.655 "superblock": true, 00:14:22.655 "num_base_bdevs": 4, 00:14:22.655 "num_base_bdevs_discovered": 3, 00:14:22.655 "num_base_bdevs_operational": 3, 00:14:22.655 "base_bdevs_list": [ 00:14:22.655 { 00:14:22.655 "name": "spare", 00:14:22.655 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:22.655 "is_configured": true, 00:14:22.655 "data_offset": 2048, 00:14:22.655 "data_size": 63488 00:14:22.655 }, 00:14:22.655 { 00:14:22.655 "name": null, 00:14:22.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.655 "is_configured": false, 00:14:22.655 "data_offset": 0, 00:14:22.655 "data_size": 63488 00:14:22.655 }, 00:14:22.655 { 00:14:22.655 "name": "BaseBdev3", 00:14:22.655 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:22.655 "is_configured": true, 00:14:22.655 "data_offset": 2048, 00:14:22.655 "data_size": 63488 00:14:22.655 }, 00:14:22.655 { 00:14:22.655 "name": "BaseBdev4", 00:14:22.655 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:22.655 "is_configured": true, 00:14:22.656 "data_offset": 2048, 00:14:22.656 "data_size": 63488 00:14:22.656 } 00:14:22.656 ] 00:14:22.656 }' 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.656 "name": "raid_bdev1", 00:14:22.656 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:22.656 "strip_size_kb": 0, 00:14:22.656 "state": "online", 00:14:22.656 "raid_level": "raid1", 00:14:22.656 "superblock": true, 00:14:22.656 "num_base_bdevs": 4, 00:14:22.656 "num_base_bdevs_discovered": 3, 00:14:22.656 "num_base_bdevs_operational": 3, 00:14:22.656 "base_bdevs_list": [ 00:14:22.656 { 00:14:22.656 "name": "spare", 00:14:22.656 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:22.656 "is_configured": true, 00:14:22.656 "data_offset": 2048, 00:14:22.656 "data_size": 63488 00:14:22.656 }, 00:14:22.656 { 00:14:22.656 "name": null, 00:14:22.656 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:22.656 "is_configured": false, 00:14:22.656 "data_offset": 0, 00:14:22.656 "data_size": 63488 00:14:22.656 }, 00:14:22.656 { 00:14:22.656 "name": "BaseBdev3", 00:14:22.656 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:22.656 "is_configured": true, 00:14:22.656 "data_offset": 2048, 00:14:22.656 "data_size": 63488 00:14:22.656 }, 00:14:22.656 { 00:14:22.656 "name": "BaseBdev4", 00:14:22.656 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:22.656 "is_configured": true, 00:14:22.656 "data_offset": 2048, 00:14:22.656 "data_size": 63488 00:14:22.656 } 00:14:22.656 ] 00:14:22.656 }' 00:14:22.656 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.916 
07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.916 "name": "raid_bdev1", 00:14:22.916 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:22.916 "strip_size_kb": 0, 00:14:22.916 "state": "online", 00:14:22.916 "raid_level": "raid1", 00:14:22.916 "superblock": true, 00:14:22.916 "num_base_bdevs": 4, 00:14:22.916 "num_base_bdevs_discovered": 3, 00:14:22.916 "num_base_bdevs_operational": 3, 00:14:22.916 "base_bdevs_list": [ 00:14:22.916 { 00:14:22.916 "name": "spare", 00:14:22.916 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:22.916 "is_configured": true, 00:14:22.916 "data_offset": 2048, 00:14:22.916 "data_size": 63488 00:14:22.916 }, 00:14:22.916 { 00:14:22.916 "name": null, 00:14:22.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.916 "is_configured": false, 00:14:22.916 "data_offset": 0, 00:14:22.916 "data_size": 63488 00:14:22.916 }, 00:14:22.916 { 00:14:22.916 "name": "BaseBdev3", 00:14:22.916 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:22.916 "is_configured": true, 00:14:22.916 "data_offset": 2048, 00:14:22.916 "data_size": 63488 00:14:22.916 }, 00:14:22.916 { 00:14:22.916 "name": "BaseBdev4", 00:14:22.916 "uuid": 
"48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:22.916 "is_configured": true, 00:14:22.916 "data_offset": 2048, 00:14:22.916 "data_size": 63488 00:14:22.916 } 00:14:22.916 ] 00:14:22.916 }' 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.916 07:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 [2024-11-29 07:46:13.088083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.176 [2024-11-29 07:46:13.088165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.176 [2024-11-29 07:46:13.088266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.176 [2024-11-29 07:46:13.088379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.176 [2024-11-29 07:46:13.088428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:23.435 /dev/nbd0 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.435 1+0 records in 00:14:23.435 1+0 records out 00:14:23.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367196 s, 11.2 MB/s 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:23.435 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:23.694 /dev/nbd1 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.694 07:46:13 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.694 1+0 records in 00:14:23.694 1+0 records out 00:14:23.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044337 s, 9.2 MB/s 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.694 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.953 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.213 07:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.213 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 [2024-11-29 07:46:14.232236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.473 [2024-11-29 07:46:14.232289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:24.473 [2024-11-29 07:46:14.232312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:24.473 [2024-11-29 07:46:14.232321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.473 [2024-11-29 07:46:14.234515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.473 [2024-11-29 07:46:14.234555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.473 [2024-11-29 07:46:14.234647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:24.473 [2024-11-29 07:46:14.234702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.473 [2024-11-29 07:46:14.234854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.473 [2024-11-29 07:46:14.234935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.473 spare 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.473 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 [2024-11-29 07:46:14.334824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:24.473 [2024-11-29 07:46:14.334849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.474 [2024-11-29 07:46:14.335146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:24.474 [2024-11-29 07:46:14.335355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:24.474 [2024-11-29 07:46:14.335368] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:24.474 [2024-11-29 07:46:14.335536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.474 
07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.474 "name": "raid_bdev1", 00:14:24.474 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:24.474 "strip_size_kb": 0, 00:14:24.474 "state": "online", 00:14:24.474 "raid_level": "raid1", 00:14:24.474 "superblock": true, 00:14:24.474 "num_base_bdevs": 4, 00:14:24.474 "num_base_bdevs_discovered": 3, 00:14:24.474 "num_base_bdevs_operational": 3, 00:14:24.474 "base_bdevs_list": [ 00:14:24.474 { 00:14:24.474 "name": "spare", 00:14:24.474 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:24.474 "is_configured": true, 00:14:24.474 "data_offset": 2048, 00:14:24.474 "data_size": 63488 00:14:24.474 }, 00:14:24.474 { 00:14:24.474 "name": null, 00:14:24.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.474 "is_configured": false, 00:14:24.474 "data_offset": 2048, 00:14:24.474 "data_size": 63488 00:14:24.474 }, 00:14:24.474 { 00:14:24.474 "name": "BaseBdev3", 00:14:24.474 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:24.474 "is_configured": true, 00:14:24.474 "data_offset": 2048, 00:14:24.474 "data_size": 63488 00:14:24.474 }, 00:14:24.474 { 00:14:24.474 "name": "BaseBdev4", 00:14:24.474 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:24.474 "is_configured": true, 00:14:24.474 "data_offset": 2048, 00:14:24.474 "data_size": 63488 00:14:24.474 } 00:14:24.474 ] 00:14:24.474 }' 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.474 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.043 07:46:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.043 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.043 "name": "raid_bdev1", 00:14:25.043 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:25.043 "strip_size_kb": 0, 00:14:25.043 "state": "online", 00:14:25.043 "raid_level": "raid1", 00:14:25.043 "superblock": true, 00:14:25.043 "num_base_bdevs": 4, 00:14:25.043 "num_base_bdevs_discovered": 3, 00:14:25.043 "num_base_bdevs_operational": 3, 00:14:25.043 "base_bdevs_list": [ 00:14:25.043 { 00:14:25.043 "name": "spare", 00:14:25.043 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:25.043 "is_configured": true, 00:14:25.043 "data_offset": 2048, 00:14:25.043 "data_size": 63488 00:14:25.043 }, 00:14:25.043 { 00:14:25.043 "name": null, 00:14:25.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.043 "is_configured": false, 00:14:25.043 "data_offset": 2048, 00:14:25.043 "data_size": 63488 00:14:25.043 }, 00:14:25.043 { 00:14:25.043 "name": "BaseBdev3", 00:14:25.043 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:25.043 "is_configured": true, 00:14:25.044 "data_offset": 2048, 00:14:25.044 "data_size": 63488 00:14:25.044 }, 00:14:25.044 { 00:14:25.044 "name": "BaseBdev4", 00:14:25.044 "uuid": 
"48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:25.044 "is_configured": true, 00:14:25.044 "data_offset": 2048, 00:14:25.044 "data_size": 63488 00:14:25.044 } 00:14:25.044 ] 00:14:25.044 }' 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.044 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.304 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.304 07:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.304 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.304 07:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.304 [2024-11-29 07:46:15.007052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.304 07:46:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.304 "name": "raid_bdev1", 00:14:25.304 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:25.304 "strip_size_kb": 0, 00:14:25.304 "state": "online", 00:14:25.304 "raid_level": "raid1", 00:14:25.304 "superblock": true, 00:14:25.304 "num_base_bdevs": 4, 00:14:25.304 "num_base_bdevs_discovered": 2, 00:14:25.304 "num_base_bdevs_operational": 2, 00:14:25.304 "base_bdevs_list": [ 00:14:25.304 { 
00:14:25.304 "name": null, 00:14:25.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.304 "is_configured": false, 00:14:25.304 "data_offset": 0, 00:14:25.304 "data_size": 63488 00:14:25.304 }, 00:14:25.304 { 00:14:25.304 "name": null, 00:14:25.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.304 "is_configured": false, 00:14:25.304 "data_offset": 2048, 00:14:25.304 "data_size": 63488 00:14:25.304 }, 00:14:25.304 { 00:14:25.304 "name": "BaseBdev3", 00:14:25.304 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:25.304 "is_configured": true, 00:14:25.304 "data_offset": 2048, 00:14:25.304 "data_size": 63488 00:14:25.304 }, 00:14:25.304 { 00:14:25.304 "name": "BaseBdev4", 00:14:25.304 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:25.304 "is_configured": true, 00:14:25.304 "data_offset": 2048, 00:14:25.304 "data_size": 63488 00:14:25.304 } 00:14:25.304 ] 00:14:25.304 }' 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.304 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.563 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.563 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.563 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.563 [2024-11-29 07:46:15.490379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.563 [2024-11-29 07:46:15.490625] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:25.563 [2024-11-29 07:46:15.490692] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:25.563 [2024-11-29 07:46:15.490782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.563 [2024-11-29 07:46:15.505149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:25.563 07:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.563 07:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:25.563 [2024-11-29 07:46:15.507030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.944 "name": "raid_bdev1", 00:14:26.944 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:26.944 "strip_size_kb": 0, 00:14:26.944 "state": "online", 00:14:26.944 "raid_level": "raid1", 
00:14:26.944 "superblock": true, 00:14:26.944 "num_base_bdevs": 4, 00:14:26.944 "num_base_bdevs_discovered": 3, 00:14:26.944 "num_base_bdevs_operational": 3, 00:14:26.944 "process": { 00:14:26.944 "type": "rebuild", 00:14:26.944 "target": "spare", 00:14:26.944 "progress": { 00:14:26.944 "blocks": 20480, 00:14:26.944 "percent": 32 00:14:26.944 } 00:14:26.944 }, 00:14:26.944 "base_bdevs_list": [ 00:14:26.944 { 00:14:26.944 "name": "spare", 00:14:26.944 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:26.944 "is_configured": true, 00:14:26.944 "data_offset": 2048, 00:14:26.944 "data_size": 63488 00:14:26.944 }, 00:14:26.944 { 00:14:26.944 "name": null, 00:14:26.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.944 "is_configured": false, 00:14:26.944 "data_offset": 2048, 00:14:26.944 "data_size": 63488 00:14:26.944 }, 00:14:26.944 { 00:14:26.944 "name": "BaseBdev3", 00:14:26.944 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:26.944 "is_configured": true, 00:14:26.944 "data_offset": 2048, 00:14:26.944 "data_size": 63488 00:14:26.944 }, 00:14:26.944 { 00:14:26.944 "name": "BaseBdev4", 00:14:26.944 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:26.944 "is_configured": true, 00:14:26.944 "data_offset": 2048, 00:14:26.944 "data_size": 63488 00:14:26.944 } 00:14:26.944 ] 00:14:26.944 }' 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.944 [2024-11-29 07:46:16.662322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.944 [2024-11-29 07:46:16.711824] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.944 [2024-11-29 07:46:16.711874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.944 [2024-11-29 07:46:16.711891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.944 [2024-11-29 07:46:16.711897] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.944 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.944 "name": "raid_bdev1", 00:14:26.944 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:26.944 "strip_size_kb": 0, 00:14:26.944 "state": "online", 00:14:26.944 "raid_level": "raid1", 00:14:26.944 "superblock": true, 00:14:26.944 "num_base_bdevs": 4, 00:14:26.944 "num_base_bdevs_discovered": 2, 00:14:26.944 "num_base_bdevs_operational": 2, 00:14:26.944 "base_bdevs_list": [ 00:14:26.944 { 00:14:26.945 "name": null, 00:14:26.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.945 "is_configured": false, 00:14:26.945 "data_offset": 0, 00:14:26.945 "data_size": 63488 00:14:26.945 }, 00:14:26.945 { 00:14:26.945 "name": null, 00:14:26.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.945 "is_configured": false, 00:14:26.945 "data_offset": 2048, 00:14:26.945 "data_size": 63488 00:14:26.945 }, 00:14:26.945 { 00:14:26.945 "name": "BaseBdev3", 00:14:26.945 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:26.945 "is_configured": true, 00:14:26.945 "data_offset": 2048, 00:14:26.945 "data_size": 63488 00:14:26.945 }, 00:14:26.945 { 00:14:26.945 "name": "BaseBdev4", 00:14:26.945 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:26.945 "is_configured": true, 00:14:26.945 "data_offset": 2048, 00:14:26.945 "data_size": 63488 00:14:26.945 } 00:14:26.945 ] 00:14:26.945 }' 00:14:26.945 07:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:26.945 07:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.514 07:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.514 07:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.514 07:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.514 [2024-11-29 07:46:17.151668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.514 [2024-11-29 07:46:17.151782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.514 [2024-11-29 07:46:17.151837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:27.514 [2024-11-29 07:46:17.151867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.514 [2024-11-29 07:46:17.152363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.514 [2024-11-29 07:46:17.152425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.514 [2024-11-29 07:46:17.152544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:27.514 [2024-11-29 07:46:17.152585] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:27.514 [2024-11-29 07:46:17.152628] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:27.514 [2024-11-29 07:46:17.152696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.514 [2024-11-29 07:46:17.166630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:27.514 spare 00:14:27.514 07:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.514 07:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:27.514 [2024-11-29 07:46:17.168469] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.455 "name": "raid_bdev1", 00:14:28.455 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:28.455 "strip_size_kb": 0, 00:14:28.455 "state": "online", 00:14:28.455 
"raid_level": "raid1", 00:14:28.455 "superblock": true, 00:14:28.455 "num_base_bdevs": 4, 00:14:28.455 "num_base_bdevs_discovered": 3, 00:14:28.455 "num_base_bdevs_operational": 3, 00:14:28.455 "process": { 00:14:28.455 "type": "rebuild", 00:14:28.455 "target": "spare", 00:14:28.455 "progress": { 00:14:28.455 "blocks": 20480, 00:14:28.455 "percent": 32 00:14:28.455 } 00:14:28.455 }, 00:14:28.455 "base_bdevs_list": [ 00:14:28.455 { 00:14:28.455 "name": "spare", 00:14:28.455 "uuid": "268b3ea3-49d8-5ab3-bfc6-c437e135c43e", 00:14:28.455 "is_configured": true, 00:14:28.455 "data_offset": 2048, 00:14:28.455 "data_size": 63488 00:14:28.455 }, 00:14:28.455 { 00:14:28.455 "name": null, 00:14:28.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.455 "is_configured": false, 00:14:28.455 "data_offset": 2048, 00:14:28.455 "data_size": 63488 00:14:28.455 }, 00:14:28.455 { 00:14:28.455 "name": "BaseBdev3", 00:14:28.455 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:28.455 "is_configured": true, 00:14:28.455 "data_offset": 2048, 00:14:28.455 "data_size": 63488 00:14:28.455 }, 00:14:28.455 { 00:14:28.455 "name": "BaseBdev4", 00:14:28.455 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:28.455 "is_configured": true, 00:14:28.455 "data_offset": 2048, 00:14:28.455 "data_size": 63488 00:14:28.455 } 00:14:28.455 ] 00:14:28.455 }' 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.455 [2024-11-29 07:46:18.328274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.455 [2024-11-29 07:46:18.373201] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.455 [2024-11-29 07:46:18.373253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.455 [2024-11-29 07:46:18.373267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.455 [2024-11-29 07:46:18.373275] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.455 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.456 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.456 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.456 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.456 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.456 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.716 
07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.716 "name": "raid_bdev1", 00:14:28.716 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:28.716 "strip_size_kb": 0, 00:14:28.716 "state": "online", 00:14:28.716 "raid_level": "raid1", 00:14:28.716 "superblock": true, 00:14:28.716 "num_base_bdevs": 4, 00:14:28.716 "num_base_bdevs_discovered": 2, 00:14:28.716 "num_base_bdevs_operational": 2, 00:14:28.716 "base_bdevs_list": [ 00:14:28.716 { 00:14:28.716 "name": null, 00:14:28.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.716 "is_configured": false, 00:14:28.716 "data_offset": 0, 00:14:28.716 "data_size": 63488 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "name": null, 00:14:28.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.716 "is_configured": false, 00:14:28.716 "data_offset": 2048, 00:14:28.716 "data_size": 63488 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "name": "BaseBdev3", 00:14:28.716 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:28.716 "is_configured": true, 00:14:28.716 "data_offset": 2048, 00:14:28.716 "data_size": 63488 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "name": "BaseBdev4", 00:14:28.716 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:28.716 "is_configured": true, 00:14:28.716 "data_offset": 2048, 00:14:28.716 "data_size": 63488 00:14:28.716 } 00:14:28.716 ] 00:14:28.716 }' 00:14:28.716 07:46:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.716 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.976 "name": "raid_bdev1", 00:14:28.976 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:28.976 "strip_size_kb": 0, 00:14:28.976 "state": "online", 00:14:28.976 "raid_level": "raid1", 00:14:28.976 "superblock": true, 00:14:28.976 "num_base_bdevs": 4, 00:14:28.976 "num_base_bdevs_discovered": 2, 00:14:28.976 "num_base_bdevs_operational": 2, 00:14:28.976 "base_bdevs_list": [ 00:14:28.976 { 00:14:28.976 "name": null, 00:14:28.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.976 "is_configured": false, 00:14:28.976 "data_offset": 0, 00:14:28.976 "data_size": 63488 00:14:28.976 }, 00:14:28.976 
{ 00:14:28.976 "name": null, 00:14:28.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.976 "is_configured": false, 00:14:28.976 "data_offset": 2048, 00:14:28.976 "data_size": 63488 00:14:28.976 }, 00:14:28.976 { 00:14:28.976 "name": "BaseBdev3", 00:14:28.976 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:28.976 "is_configured": true, 00:14:28.976 "data_offset": 2048, 00:14:28.976 "data_size": 63488 00:14:28.976 }, 00:14:28.976 { 00:14:28.976 "name": "BaseBdev4", 00:14:28.976 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:28.976 "is_configured": true, 00:14:28.976 "data_offset": 2048, 00:14:28.976 "data_size": 63488 00:14:28.976 } 00:14:28.976 ] 00:14:28.976 }' 00:14:28.976 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.237 07:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.237 [2024-11-29 07:46:19.012377] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.237 [2024-11-29 07:46:19.012430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.237 [2024-11-29 07:46:19.012450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:29.237 [2024-11-29 07:46:19.012460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.237 [2024-11-29 07:46:19.012899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.237 [2024-11-29 07:46:19.012919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.237 [2024-11-29 07:46:19.012994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:29.237 [2024-11-29 07:46:19.013008] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:29.237 [2024-11-29 07:46:19.013015] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:29.237 [2024-11-29 07:46:19.013038] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:29.237 BaseBdev1 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.237 07:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.176 07:46:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.176 "name": "raid_bdev1", 00:14:30.176 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:30.176 "strip_size_kb": 0, 00:14:30.176 "state": "online", 00:14:30.176 "raid_level": "raid1", 00:14:30.176 "superblock": true, 00:14:30.176 "num_base_bdevs": 4, 00:14:30.176 "num_base_bdevs_discovered": 2, 00:14:30.176 "num_base_bdevs_operational": 2, 00:14:30.176 "base_bdevs_list": [ 00:14:30.176 { 00:14:30.176 "name": null, 00:14:30.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.176 "is_configured": false, 00:14:30.176 "data_offset": 0, 00:14:30.176 "data_size": 63488 00:14:30.176 }, 00:14:30.176 { 00:14:30.176 "name": null, 00:14:30.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.176 
"is_configured": false, 00:14:30.176 "data_offset": 2048, 00:14:30.176 "data_size": 63488 00:14:30.176 }, 00:14:30.176 { 00:14:30.176 "name": "BaseBdev3", 00:14:30.176 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:30.176 "is_configured": true, 00:14:30.176 "data_offset": 2048, 00:14:30.176 "data_size": 63488 00:14:30.176 }, 00:14:30.176 { 00:14:30.176 "name": "BaseBdev4", 00:14:30.176 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:30.176 "is_configured": true, 00:14:30.176 "data_offset": 2048, 00:14:30.176 "data_size": 63488 00:14:30.176 } 00:14:30.176 ] 00:14:30.176 }' 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.176 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.745 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:30.745 "name": "raid_bdev1", 00:14:30.745 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:30.745 "strip_size_kb": 0, 00:14:30.745 "state": "online", 00:14:30.745 "raid_level": "raid1", 00:14:30.745 "superblock": true, 00:14:30.745 "num_base_bdevs": 4, 00:14:30.745 "num_base_bdevs_discovered": 2, 00:14:30.745 "num_base_bdevs_operational": 2, 00:14:30.745 "base_bdevs_list": [ 00:14:30.745 { 00:14:30.745 "name": null, 00:14:30.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.745 "is_configured": false, 00:14:30.745 "data_offset": 0, 00:14:30.745 "data_size": 63488 00:14:30.746 }, 00:14:30.746 { 00:14:30.746 "name": null, 00:14:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.746 "is_configured": false, 00:14:30.746 "data_offset": 2048, 00:14:30.746 "data_size": 63488 00:14:30.746 }, 00:14:30.746 { 00:14:30.746 "name": "BaseBdev3", 00:14:30.746 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:30.746 "is_configured": true, 00:14:30.746 "data_offset": 2048, 00:14:30.746 "data_size": 63488 00:14:30.746 }, 00:14:30.746 { 00:14:30.746 "name": "BaseBdev4", 00:14:30.746 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:30.746 "is_configured": true, 00:14:30.746 "data_offset": 2048, 00:14:30.746 "data_size": 63488 00:14:30.746 } 00:14:30.746 ] 00:14:30.746 }' 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.746 [2024-11-29 07:46:20.565853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.746 [2024-11-29 07:46:20.566092] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:30.746 [2024-11-29 07:46:20.566121] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.746 request: 00:14:30.746 { 00:14:30.746 "base_bdev": "BaseBdev1", 00:14:30.746 "raid_bdev": "raid_bdev1", 00:14:30.746 "method": "bdev_raid_add_base_bdev", 00:14:30.746 "req_id": 1 00:14:30.746 } 00:14:30.746 Got JSON-RPC error response 00:14:30.746 response: 00:14:30.746 { 00:14:30.746 "code": -22, 00:14:30.746 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:30.746 } 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.746 07:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.707 "name": "raid_bdev1", 00:14:31.707 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:31.707 "strip_size_kb": 0, 00:14:31.707 "state": "online", 00:14:31.707 "raid_level": "raid1", 00:14:31.707 "superblock": true, 00:14:31.707 "num_base_bdevs": 4, 00:14:31.707 "num_base_bdevs_discovered": 2, 00:14:31.707 "num_base_bdevs_operational": 2, 00:14:31.707 "base_bdevs_list": [ 00:14:31.707 { 00:14:31.707 "name": null, 00:14:31.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.707 "is_configured": false, 00:14:31.707 "data_offset": 0, 00:14:31.707 "data_size": 63488 00:14:31.707 }, 00:14:31.707 { 00:14:31.707 "name": null, 00:14:31.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.707 "is_configured": false, 00:14:31.707 "data_offset": 2048, 00:14:31.707 "data_size": 63488 00:14:31.707 }, 00:14:31.707 { 00:14:31.707 "name": "BaseBdev3", 00:14:31.707 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:31.707 "is_configured": true, 00:14:31.707 "data_offset": 2048, 00:14:31.707 "data_size": 63488 00:14:31.707 }, 00:14:31.707 { 00:14:31.707 "name": "BaseBdev4", 00:14:31.707 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:31.707 "is_configured": true, 00:14:31.707 "data_offset": 2048, 00:14:31.707 "data_size": 63488 00:14:31.707 } 00:14:31.707 ] 00:14:31.707 }' 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.707 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.277 07:46:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.277 07:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.277 "name": "raid_bdev1", 00:14:32.277 "uuid": "a5a86c2a-8fc4-450e-9a89-783aeaa4de10", 00:14:32.277 "strip_size_kb": 0, 00:14:32.277 "state": "online", 00:14:32.277 "raid_level": "raid1", 00:14:32.277 "superblock": true, 00:14:32.277 "num_base_bdevs": 4, 00:14:32.277 "num_base_bdevs_discovered": 2, 00:14:32.277 "num_base_bdevs_operational": 2, 00:14:32.277 "base_bdevs_list": [ 00:14:32.277 { 00:14:32.277 "name": null, 00:14:32.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.277 "is_configured": false, 00:14:32.277 "data_offset": 0, 00:14:32.277 "data_size": 63488 00:14:32.277 }, 00:14:32.277 { 00:14:32.277 "name": null, 00:14:32.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.277 "is_configured": false, 00:14:32.277 "data_offset": 2048, 00:14:32.277 "data_size": 63488 00:14:32.277 }, 00:14:32.277 { 00:14:32.277 "name": "BaseBdev3", 00:14:32.277 "uuid": "716b1310-22bc-5a42-b6cd-a2a0546261e0", 00:14:32.277 "is_configured": true, 00:14:32.277 "data_offset": 2048, 00:14:32.277 "data_size": 63488 00:14:32.277 }, 
00:14:32.277 { 00:14:32.277 "name": "BaseBdev4", 00:14:32.277 "uuid": "48a43a0e-63f9-54f1-99de-3da24e98d46c", 00:14:32.277 "is_configured": true, 00:14:32.277 "data_offset": 2048, 00:14:32.277 "data_size": 63488 00:14:32.277 } 00:14:32.277 ] 00:14:32.277 }' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77701 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77701 ']' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77701 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77701 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.277 killing process with pid 77701 00:14:32.277 Received shutdown signal, test time was about 60.000000 seconds 00:14:32.277 00:14:32.277 Latency(us) 00:14:32.277 [2024-11-29T07:46:22.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.277 [2024-11-29T07:46:22.222Z] 
=================================================================================================================== 00:14:32.277 [2024-11-29T07:46:22.222Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77701' 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77701 00:14:32.277 07:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77701 00:14:32.277 [2024-11-29 07:46:22.142595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.277 [2024-11-29 07:46:22.142716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.277 [2024-11-29 07:46:22.142802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.277 [2024-11-29 07:46:22.142811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:32.847 [2024-11-29 07:46:22.604984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.786 07:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:33.786 00:14:33.786 real 0m24.431s 00:14:33.786 user 0m29.648s 00:14:33.786 sys 0m3.519s 00:14:33.786 07:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.786 07:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.786 ************************************ 00:14:33.786 END TEST raid_rebuild_test_sb 00:14:33.786 ************************************ 00:14:33.786 07:46:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:33.786 07:46:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:33.786 07:46:23 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.786 07:46:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.046 ************************************ 00:14:34.046 START TEST raid_rebuild_test_io 00:14:34.046 ************************************ 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78452 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78452 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78452 ']' 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.046 07:46:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.046 [2024-11-29 07:46:23.829660] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:34.046 [2024-11-29 07:46:23.829858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.046 Zero copy mechanism will not be used. 
00:14:34.046 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78452 ] 00:14:34.306 [2024-11-29 07:46:23.999886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.306 [2024-11-29 07:46:24.104498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.566 [2024-11-29 07:46:24.290524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.566 [2024-11-29 07:46:24.290644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.825 BaseBdev1_malloc 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.825 [2024-11-29 07:46:24.694821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.825 [2024-11-29 07:46:24.694883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:34.825 [2024-11-29 07:46:24.694920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.825 [2024-11-29 07:46:24.694931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.825 [2024-11-29 07:46:24.697056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.825 [2024-11-29 07:46:24.697106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.825 BaseBdev1 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.825 BaseBdev2_malloc 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.825 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.825 [2024-11-29 07:46:24.746210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.825 [2024-11-29 07:46:24.746265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.825 [2024-11-29 07:46:24.746286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.825 [2024-11-29 07:46:24.746296] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.826 [2024-11-29 07:46:24.748315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.826 [2024-11-29 07:46:24.748424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.826 BaseBdev2 00:14:34.826 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.826 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.826 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.826 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.826 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 BaseBdev3_malloc 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 [2024-11-29 07:46:24.812953] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:35.085 [2024-11-29 07:46:24.813004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.085 [2024-11-29 07:46:24.813024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:35.085 [2024-11-29 07:46:24.813034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.085 [2024-11-29 07:46:24.815053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:35.085 [2024-11-29 07:46:24.815092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:35.085 BaseBdev3 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 BaseBdev4_malloc 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 [2024-11-29 07:46:24.865409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:35.085 [2024-11-29 07:46:24.865478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.085 [2024-11-29 07:46:24.865498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:35.085 [2024-11-29 07:46:24.865508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.085 [2024-11-29 07:46:24.867459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.085 [2024-11-29 07:46:24.867499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:35.085 BaseBdev4 00:14:35.085 07:46:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 spare_malloc 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 spare_delay 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 [2024-11-29 07:46:24.931396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.085 [2024-11-29 07:46:24.931454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.085 [2024-11-29 07:46:24.931485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:35.085 [2024-11-29 07:46:24.931494] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.085 [2024-11-29 07:46:24.933508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:35.085 [2024-11-29 07:46:24.933545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.085 spare 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 [2024-11-29 07:46:24.943418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.085 [2024-11-29 07:46:24.945259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.085 [2024-11-29 07:46:24.945315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.085 [2024-11-29 07:46:24.945362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.085 [2024-11-29 07:46:24.945433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:35.085 [2024-11-29 07:46:24.945445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:35.085 [2024-11-29 07:46:24.945666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:35.085 [2024-11-29 07:46:24.945817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:35.085 [2024-11-29 07:46:24.945829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:35.085 [2024-11-29 07:46:24.945960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 07:46:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.085 "name": "raid_bdev1", 00:14:35.085 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:35.085 "strip_size_kb": 0, 00:14:35.085 "state": "online", 00:14:35.085 "raid_level": "raid1", 00:14:35.085 "superblock": 
false, 00:14:35.085 "num_base_bdevs": 4, 00:14:35.085 "num_base_bdevs_discovered": 4, 00:14:35.085 "num_base_bdevs_operational": 4, 00:14:35.085 "base_bdevs_list": [ 00:14:35.085 { 00:14:35.085 "name": "BaseBdev1", 00:14:35.085 "uuid": "41b3c75b-de80-5dc9-be4a-d2f38fb5d5c3", 00:14:35.085 "is_configured": true, 00:14:35.085 "data_offset": 0, 00:14:35.085 "data_size": 65536 00:14:35.085 }, 00:14:35.085 { 00:14:35.085 "name": "BaseBdev2", 00:14:35.085 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:35.085 "is_configured": true, 00:14:35.085 "data_offset": 0, 00:14:35.085 "data_size": 65536 00:14:35.086 }, 00:14:35.086 { 00:14:35.086 "name": "BaseBdev3", 00:14:35.086 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:35.086 "is_configured": true, 00:14:35.086 "data_offset": 0, 00:14:35.086 "data_size": 65536 00:14:35.086 }, 00:14:35.086 { 00:14:35.086 "name": "BaseBdev4", 00:14:35.086 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:35.086 "is_configured": true, 00:14:35.086 "data_offset": 0, 00:14:35.086 "data_size": 65536 00:14:35.086 } 00:14:35.086 ] 00:14:35.086 }' 00:14:35.086 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.086 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 [2024-11-29 07:46:25.347004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 [2024-11-29 07:46:25.422548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.653 07:46:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.653 "name": "raid_bdev1", 00:14:35.653 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:35.653 "strip_size_kb": 0, 00:14:35.653 "state": "online", 00:14:35.653 "raid_level": "raid1", 00:14:35.653 "superblock": false, 00:14:35.653 "num_base_bdevs": 4, 00:14:35.653 "num_base_bdevs_discovered": 3, 00:14:35.653 "num_base_bdevs_operational": 3, 00:14:35.653 "base_bdevs_list": [ 00:14:35.653 { 00:14:35.653 "name": null, 00:14:35.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.653 "is_configured": false, 00:14:35.653 "data_offset": 0, 00:14:35.653 "data_size": 65536 00:14:35.653 }, 00:14:35.653 { 00:14:35.653 "name": "BaseBdev2", 00:14:35.653 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:35.653 
"is_configured": true, 00:14:35.653 "data_offset": 0, 00:14:35.653 "data_size": 65536 00:14:35.653 }, 00:14:35.653 { 00:14:35.653 "name": "BaseBdev3", 00:14:35.653 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:35.653 "is_configured": true, 00:14:35.653 "data_offset": 0, 00:14:35.653 "data_size": 65536 00:14:35.653 }, 00:14:35.653 { 00:14:35.653 "name": "BaseBdev4", 00:14:35.653 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:35.653 "is_configured": true, 00:14:35.653 "data_offset": 0, 00:14:35.653 "data_size": 65536 00:14:35.653 } 00:14:35.653 ] 00:14:35.653 }' 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.653 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 [2024-11-29 07:46:25.534012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:35.653 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.653 Zero copy mechanism will not be used. 00:14:35.653 Running I/O for 60 seconds... 
00:14:36.223 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.223 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.223 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.223 [2024-11-29 07:46:25.882981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.223 07:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.223 07:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:36.223 [2024-11-29 07:46:25.948934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:36.223 [2024-11-29 07:46:25.950859] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.223 [2024-11-29 07:46:26.066430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.223 [2024-11-29 07:46:26.066946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.483 [2024-11-29 07:46:26.181483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.483 [2024-11-29 07:46:26.182236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.743 [2024-11-29 07:46:26.526833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:37.003 167.00 IOPS, 501.00 MiB/s [2024-11-29T07:46:26.948Z] [2024-11-29 07:46:26.747675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:37.003 [2024-11-29 07:46:26.748021] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:37.003 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.003 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.003 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.003 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.003 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.264 [2024-11-29 07:46:26.981921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.264 "name": "raid_bdev1", 00:14:37.264 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:37.264 "strip_size_kb": 0, 00:14:37.264 "state": "online", 00:14:37.264 "raid_level": "raid1", 00:14:37.264 "superblock": false, 00:14:37.264 "num_base_bdevs": 4, 00:14:37.264 "num_base_bdevs_discovered": 4, 00:14:37.264 "num_base_bdevs_operational": 4, 00:14:37.264 "process": { 00:14:37.264 "type": "rebuild", 00:14:37.264 "target": "spare", 00:14:37.264 "progress": { 00:14:37.264 "blocks": 12288, 
00:14:37.264 "percent": 18 00:14:37.264 } 00:14:37.264 }, 00:14:37.264 "base_bdevs_list": [ 00:14:37.264 { 00:14:37.264 "name": "spare", 00:14:37.264 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 }, 00:14:37.264 { 00:14:37.264 "name": "BaseBdev2", 00:14:37.264 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 }, 00:14:37.264 { 00:14:37.264 "name": "BaseBdev3", 00:14:37.264 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 }, 00:14:37.264 { 00:14:37.264 "name": "BaseBdev4", 00:14:37.264 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 } 00:14:37.264 ] 00:14:37.264 }' 00:14:37.264 07:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.264 [2024-11-29 07:46:27.057007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.264 [2024-11-29 07:46:27.091719] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.264 [2024-11-29 07:46:27.109392] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.264 [2024-11-29 07:46:27.125843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.264 [2024-11-29 07:46:27.125960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.264 [2024-11-29 07:46:27.125987] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.264 [2024-11-29 07:46:27.159528] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.264 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.525 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.525 "name": "raid_bdev1", 00:14:37.525 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:37.525 "strip_size_kb": 0, 00:14:37.525 "state": "online", 00:14:37.525 "raid_level": "raid1", 00:14:37.525 "superblock": false, 00:14:37.525 "num_base_bdevs": 4, 00:14:37.525 "num_base_bdevs_discovered": 3, 00:14:37.525 "num_base_bdevs_operational": 3, 00:14:37.525 "base_bdevs_list": [ 00:14:37.525 { 00:14:37.525 "name": null, 00:14:37.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.525 "is_configured": false, 00:14:37.525 "data_offset": 0, 00:14:37.525 "data_size": 65536 00:14:37.525 }, 00:14:37.525 { 00:14:37.525 "name": "BaseBdev2", 00:14:37.525 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:37.525 "is_configured": true, 00:14:37.525 "data_offset": 0, 00:14:37.525 "data_size": 65536 00:14:37.525 }, 00:14:37.525 { 00:14:37.525 "name": "BaseBdev3", 00:14:37.525 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:37.525 "is_configured": true, 00:14:37.525 "data_offset": 0, 00:14:37.525 "data_size": 65536 00:14:37.525 }, 00:14:37.525 { 00:14:37.525 "name": "BaseBdev4", 00:14:37.525 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:37.525 "is_configured": true, 00:14:37.525 "data_offset": 0, 00:14:37.525 "data_size": 65536 00:14:37.525 } 00:14:37.525 ] 00:14:37.525 }' 00:14:37.525 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:37.525 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.785 175.00 IOPS, 525.00 MiB/s [2024-11-29T07:46:27.731Z] 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.786 "name": "raid_bdev1", 00:14:37.786 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:37.786 "strip_size_kb": 0, 00:14:37.786 "state": "online", 00:14:37.786 "raid_level": "raid1", 00:14:37.786 "superblock": false, 00:14:37.786 "num_base_bdevs": 4, 00:14:37.786 "num_base_bdevs_discovered": 3, 00:14:37.786 "num_base_bdevs_operational": 3, 00:14:37.786 "base_bdevs_list": [ 00:14:37.786 { 00:14:37.786 "name": null, 00:14:37.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.786 "is_configured": false, 00:14:37.786 "data_offset": 0, 00:14:37.786 "data_size": 65536 00:14:37.786 }, 00:14:37.786 { 
00:14:37.786 "name": "BaseBdev2", 00:14:37.786 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:37.786 "is_configured": true, 00:14:37.786 "data_offset": 0, 00:14:37.786 "data_size": 65536 00:14:37.786 }, 00:14:37.786 { 00:14:37.786 "name": "BaseBdev3", 00:14:37.786 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:37.786 "is_configured": true, 00:14:37.786 "data_offset": 0, 00:14:37.786 "data_size": 65536 00:14:37.786 }, 00:14:37.786 { 00:14:37.786 "name": "BaseBdev4", 00:14:37.786 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:37.786 "is_configured": true, 00:14:37.786 "data_offset": 0, 00:14:37.786 "data_size": 65536 00:14:37.786 } 00:14:37.786 ] 00:14:37.786 }' 00:14:37.786 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.047 [2024-11-29 07:46:27.813604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.047 07:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.047 [2024-11-29 07:46:27.878538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:38.047 [2024-11-29 07:46:27.880471] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.307 [2024-11-29 07:46:27.995848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.307 [2024-11-29 07:46:27.996431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.308 [2024-11-29 07:46:28.225958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.308 [2024-11-29 07:46:28.226752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.878 176.33 IOPS, 529.00 MiB/s [2024-11-29T07:46:28.823Z] [2024-11-29 07:46:28.560433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:38.878 [2024-11-29 07:46:28.777720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.878 [2024-11-29 07:46:28.778140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.138 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.138 "name": "raid_bdev1", 00:14:39.138 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:39.138 "strip_size_kb": 0, 00:14:39.138 "state": "online", 00:14:39.138 "raid_level": "raid1", 00:14:39.138 "superblock": false, 00:14:39.138 "num_base_bdevs": 4, 00:14:39.138 "num_base_bdevs_discovered": 4, 00:14:39.138 "num_base_bdevs_operational": 4, 00:14:39.138 "process": { 00:14:39.138 "type": "rebuild", 00:14:39.138 "target": "spare", 00:14:39.138 "progress": { 00:14:39.138 "blocks": 10240, 00:14:39.138 "percent": 15 00:14:39.138 } 00:14:39.138 }, 00:14:39.138 "base_bdevs_list": [ 00:14:39.138 { 00:14:39.138 "name": "spare", 00:14:39.138 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:39.138 "is_configured": true, 00:14:39.138 "data_offset": 0, 00:14:39.138 "data_size": 65536 00:14:39.138 }, 00:14:39.138 { 00:14:39.138 "name": "BaseBdev2", 00:14:39.138 "uuid": "d11d7778-bfe8-55c5-9baf-cd1bf509a48e", 00:14:39.138 "is_configured": true, 00:14:39.138 "data_offset": 0, 00:14:39.138 "data_size": 65536 00:14:39.138 }, 00:14:39.139 { 00:14:39.139 "name": "BaseBdev3", 00:14:39.139 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:39.139 "is_configured": true, 00:14:39.139 "data_offset": 0, 00:14:39.139 "data_size": 65536 00:14:39.139 }, 00:14:39.139 { 00:14:39.139 "name": "BaseBdev4", 00:14:39.139 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:39.139 "is_configured": true, 00:14:39.139 "data_offset": 0, 00:14:39.139 "data_size": 65536 00:14:39.139 } 00:14:39.139 ] 00:14:39.139 }' 00:14:39.139 07:46:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.139 07:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.139 [2024-11-29 07:46:28.971563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.399 [2024-11-29 07:46:29.101604] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:39.399 [2024-11-29 07:46:29.101715] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.399 "name": "raid_bdev1", 00:14:39.399 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:39.399 "strip_size_kb": 0, 00:14:39.399 "state": "online", 00:14:39.399 "raid_level": "raid1", 00:14:39.399 "superblock": false, 00:14:39.399 "num_base_bdevs": 4, 00:14:39.399 "num_base_bdevs_discovered": 3, 00:14:39.399 "num_base_bdevs_operational": 3, 00:14:39.399 "process": { 00:14:39.399 "type": "rebuild", 00:14:39.399 "target": "spare", 00:14:39.399 "progress": { 00:14:39.399 "blocks": 12288, 00:14:39.399 "percent": 18 00:14:39.399 } 00:14:39.399 }, 00:14:39.399 "base_bdevs_list": [ 00:14:39.399 { 00:14:39.399 "name": "spare", 00:14:39.399 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:39.399 "is_configured": true, 00:14:39.399 "data_offset": 0, 00:14:39.399 "data_size": 65536 00:14:39.399 }, 00:14:39.399 { 00:14:39.399 "name": null, 00:14:39.399 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:39.399 "is_configured": false, 00:14:39.399 "data_offset": 0, 00:14:39.399 "data_size": 65536 00:14:39.399 }, 00:14:39.399 { 00:14:39.399 "name": "BaseBdev3", 00:14:39.399 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:39.399 "is_configured": true, 00:14:39.399 "data_offset": 0, 00:14:39.399 "data_size": 65536 00:14:39.399 }, 00:14:39.399 { 00:14:39.399 "name": "BaseBdev4", 00:14:39.399 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:39.399 "is_configured": true, 00:14:39.399 "data_offset": 0, 00:14:39.399 "data_size": 65536 00:14:39.399 } 00:14:39.399 ] 00:14:39.399 }' 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.399 
07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.399 [2024-11-29 07:46:29.247522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.399 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.399 "name": "raid_bdev1", 00:14:39.399 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:39.399 "strip_size_kb": 0, 00:14:39.399 "state": "online", 00:14:39.399 "raid_level": "raid1", 00:14:39.399 "superblock": false, 00:14:39.399 "num_base_bdevs": 4, 00:14:39.399 "num_base_bdevs_discovered": 3, 00:14:39.399 "num_base_bdevs_operational": 3, 00:14:39.400 "process": { 00:14:39.400 "type": "rebuild", 00:14:39.400 "target": "spare", 00:14:39.400 "progress": { 00:14:39.400 "blocks": 12288, 00:14:39.400 "percent": 18 00:14:39.400 } 00:14:39.400 }, 00:14:39.400 "base_bdevs_list": [ 00:14:39.400 { 00:14:39.400 "name": "spare", 00:14:39.400 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:39.400 "is_configured": true, 00:14:39.400 "data_offset": 0, 00:14:39.400 "data_size": 65536 00:14:39.400 }, 00:14:39.400 { 00:14:39.400 "name": null, 00:14:39.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.400 "is_configured": false, 00:14:39.400 "data_offset": 0, 00:14:39.400 "data_size": 65536 00:14:39.400 }, 00:14:39.400 { 00:14:39.400 "name": "BaseBdev3", 00:14:39.400 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:39.400 "is_configured": true, 00:14:39.400 "data_offset": 0, 00:14:39.400 "data_size": 65536 00:14:39.400 }, 00:14:39.400 { 00:14:39.400 "name": "BaseBdev4", 
00:14:39.400 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:39.400 "is_configured": true, 00:14:39.400 "data_offset": 0, 00:14:39.400 "data_size": 65536 00:14:39.400 } 00:14:39.400 ] 00:14:39.400 }' 00:14:39.400 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.400 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.400 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.660 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.660 07:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.660 [2024-11-29 07:46:29.463525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:39.920 154.50 IOPS, 463.50 MiB/s [2024-11-29T07:46:29.865Z] [2024-11-29 07:46:29.799724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:39.920 [2024-11-29 07:46:29.800136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:40.180 [2024-11-29 07:46:29.909285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:40.440 [2024-11-29 07:46:30.137947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:40.440 [2024-11-29 07:46:30.138958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.440 07:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.700 "name": "raid_bdev1", 00:14:40.700 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:40.700 "strip_size_kb": 0, 00:14:40.700 "state": "online", 00:14:40.700 "raid_level": "raid1", 00:14:40.700 "superblock": false, 00:14:40.700 "num_base_bdevs": 4, 00:14:40.700 "num_base_bdevs_discovered": 3, 00:14:40.700 "num_base_bdevs_operational": 3, 00:14:40.700 "process": { 00:14:40.700 "type": "rebuild", 00:14:40.700 "target": "spare", 00:14:40.700 "progress": { 00:14:40.700 "blocks": 26624, 00:14:40.700 "percent": 40 00:14:40.700 } 00:14:40.700 }, 00:14:40.700 "base_bdevs_list": [ 00:14:40.700 { 00:14:40.700 "name": "spare", 00:14:40.700 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:40.700 "is_configured": true, 00:14:40.700 "data_offset": 0, 00:14:40.700 "data_size": 65536 00:14:40.700 }, 00:14:40.700 { 00:14:40.700 "name": null, 
00:14:40.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.700 "is_configured": false, 00:14:40.700 "data_offset": 0, 00:14:40.700 "data_size": 65536 00:14:40.700 }, 00:14:40.700 { 00:14:40.700 "name": "BaseBdev3", 00:14:40.700 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:40.700 "is_configured": true, 00:14:40.700 "data_offset": 0, 00:14:40.700 "data_size": 65536 00:14:40.700 }, 00:14:40.700 { 00:14:40.700 "name": "BaseBdev4", 00:14:40.700 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:40.700 "is_configured": true, 00:14:40.700 "data_offset": 0, 00:14:40.700 "data_size": 65536 00:14:40.700 } 00:14:40.700 ] 00:14:40.700 }' 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.700 07:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.270 135.60 IOPS, 406.80 MiB/s [2024-11-29T07:46:31.215Z] [2024-11-29 07:46:31.088574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:41.270 [2024-11-29 07:46:31.094602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:41.529 [2024-11-29 07:46:31.418233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.789 120.50 IOPS, 361.50 MiB/s [2024-11-29T07:46:31.734Z] 07:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.789 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.789 "name": "raid_bdev1", 00:14:41.789 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:41.789 "strip_size_kb": 0, 00:14:41.789 "state": "online", 00:14:41.789 "raid_level": "raid1", 00:14:41.789 "superblock": false, 00:14:41.789 "num_base_bdevs": 4, 00:14:41.789 "num_base_bdevs_discovered": 3, 00:14:41.789 "num_base_bdevs_operational": 3, 00:14:41.789 "process": { 00:14:41.789 "type": "rebuild", 00:14:41.789 "target": "spare", 00:14:41.789 "progress": { 00:14:41.789 "blocks": 45056, 00:14:41.789 "percent": 68 00:14:41.789 } 00:14:41.789 }, 00:14:41.789 "base_bdevs_list": [ 00:14:41.789 { 00:14:41.789 "name": "spare", 00:14:41.789 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:41.789 "is_configured": true, 00:14:41.789 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": null, 
00:14:41.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.790 "is_configured": false, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": "BaseBdev3", 00:14:41.790 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:41.790 "is_configured": true, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": "BaseBdev4", 00:14:41.790 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:41.790 "is_configured": true, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 } 00:14:41.790 ] 00:14:41.790 }' 00:14:41.790 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.790 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.790 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.790 [2024-11-29 07:46:31.627285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:41.790 [2024-11-29 07:46:31.627651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:41.790 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.790 07:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.734 108.14 IOPS, 324.43 MiB/s [2024-11-29T07:46:32.679Z] [2024-11-29 07:46:32.620659] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.734 07:46:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.734 07:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.994 "name": "raid_bdev1", 00:14:42.994 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:42.994 "strip_size_kb": 0, 00:14:42.994 "state": "online", 00:14:42.994 "raid_level": "raid1", 00:14:42.994 "superblock": false, 00:14:42.994 "num_base_bdevs": 4, 00:14:42.994 "num_base_bdevs_discovered": 3, 00:14:42.994 "num_base_bdevs_operational": 3, 00:14:42.994 "process": { 00:14:42.994 "type": "rebuild", 00:14:42.994 "target": "spare", 00:14:42.994 "progress": { 00:14:42.994 "blocks": 65536, 00:14:42.994 "percent": 100 00:14:42.994 } 00:14:42.994 }, 00:14:42.994 "base_bdevs_list": [ 00:14:42.994 { 00:14:42.994 "name": "spare", 00:14:42.994 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:42.994 "is_configured": true, 00:14:42.994 "data_offset": 0, 00:14:42.994 "data_size": 65536 00:14:42.994 }, 00:14:42.994 { 00:14:42.994 "name": null, 00:14:42.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.994 
"is_configured": false, 00:14:42.994 "data_offset": 0, 00:14:42.994 "data_size": 65536 00:14:42.994 }, 00:14:42.994 { 00:14:42.994 "name": "BaseBdev3", 00:14:42.994 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:42.994 "is_configured": true, 00:14:42.994 "data_offset": 0, 00:14:42.994 "data_size": 65536 00:14:42.994 }, 00:14:42.994 { 00:14:42.994 "name": "BaseBdev4", 00:14:42.994 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:42.994 "is_configured": true, 00:14:42.994 "data_offset": 0, 00:14:42.994 "data_size": 65536 00:14:42.994 } 00:14:42.994 ] 00:14:42.994 }' 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.994 [2024-11-29 07:46:32.720445] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:42.994 [2024-11-29 07:46:32.723091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.994 07:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.934 99.25 IOPS, 297.75 MiB/s [2024-11-29T07:46:33.879Z] 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.934 "name": "raid_bdev1", 00:14:43.934 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:43.934 "strip_size_kb": 0, 00:14:43.934 "state": "online", 00:14:43.934 "raid_level": "raid1", 00:14:43.934 "superblock": false, 00:14:43.934 "num_base_bdevs": 4, 00:14:43.934 "num_base_bdevs_discovered": 3, 00:14:43.934 "num_base_bdevs_operational": 3, 00:14:43.934 "base_bdevs_list": [ 00:14:43.934 { 00:14:43.934 "name": "spare", 00:14:43.934 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:43.934 "is_configured": true, 00:14:43.934 "data_offset": 0, 00:14:43.934 "data_size": 65536 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "name": null, 00:14:43.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.934 "is_configured": false, 00:14:43.934 "data_offset": 0, 00:14:43.934 "data_size": 65536 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "name": "BaseBdev3", 00:14:43.934 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:43.934 "is_configured": true, 00:14:43.934 "data_offset": 0, 00:14:43.934 "data_size": 65536 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "name": "BaseBdev4", 00:14:43.934 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:43.934 "is_configured": true, 00:14:43.934 "data_offset": 0, 
00:14:43.934 "data_size": 65536 00:14:43.934 } 00:14:43.934 ] 00:14:43.934 }' 00:14:43.934 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.194 "name": "raid_bdev1", 00:14:44.194 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:44.194 "strip_size_kb": 0, 00:14:44.194 "state": "online", 00:14:44.194 
"raid_level": "raid1", 00:14:44.194 "superblock": false, 00:14:44.194 "num_base_bdevs": 4, 00:14:44.194 "num_base_bdevs_discovered": 3, 00:14:44.194 "num_base_bdevs_operational": 3, 00:14:44.194 "base_bdevs_list": [ 00:14:44.194 { 00:14:44.194 "name": "spare", 00:14:44.194 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:44.194 "is_configured": true, 00:14:44.194 "data_offset": 0, 00:14:44.194 "data_size": 65536 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "name": null, 00:14:44.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.194 "is_configured": false, 00:14:44.194 "data_offset": 0, 00:14:44.194 "data_size": 65536 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "name": "BaseBdev3", 00:14:44.194 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:44.194 "is_configured": true, 00:14:44.194 "data_offset": 0, 00:14:44.194 "data_size": 65536 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "name": "BaseBdev4", 00:14:44.194 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:44.194 "is_configured": true, 00:14:44.194 "data_offset": 0, 00:14:44.194 "data_size": 65536 00:14:44.194 } 00:14:44.194 ] 00:14:44.194 }' 00:14:44.194 07:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.194 07:46:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.195 "name": "raid_bdev1", 00:14:44.195 "uuid": "2608897c-cf15-4433-8049-fedca4965919", 00:14:44.195 "strip_size_kb": 0, 00:14:44.195 "state": "online", 00:14:44.195 "raid_level": "raid1", 00:14:44.195 "superblock": false, 00:14:44.195 "num_base_bdevs": 4, 00:14:44.195 "num_base_bdevs_discovered": 3, 00:14:44.195 "num_base_bdevs_operational": 3, 00:14:44.195 "base_bdevs_list": [ 00:14:44.195 { 00:14:44.195 "name": "spare", 00:14:44.195 "uuid": "23dac163-51cd-5da9-a6c8-ca9951e16dff", 00:14:44.195 "is_configured": true, 00:14:44.195 "data_offset": 0, 00:14:44.195 "data_size": 65536 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "name": null, 
00:14:44.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.195 "is_configured": false, 00:14:44.195 "data_offset": 0, 00:14:44.195 "data_size": 65536 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "name": "BaseBdev3", 00:14:44.195 "uuid": "b28843ae-fbb6-5c9b-8523-e08076cc570e", 00:14:44.195 "is_configured": true, 00:14:44.195 "data_offset": 0, 00:14:44.195 "data_size": 65536 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "name": "BaseBdev4", 00:14:44.195 "uuid": "176671ea-fed5-509b-8cd6-81b956a50a41", 00:14:44.195 "is_configured": true, 00:14:44.195 "data_offset": 0, 00:14:44.195 "data_size": 65536 00:14:44.195 } 00:14:44.195 ] 00:14:44.195 }' 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.195 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.765 92.56 IOPS, 277.67 MiB/s [2024-11-29T07:46:34.710Z] 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.765 [2024-11-29 07:46:34.548735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.765 [2024-11-29 07:46:34.548809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.765 00:14:44.765 Latency(us) 00:14:44.765 [2024-11-29T07:46:34.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.765 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:44.765 raid_bdev1 : 9.13 91.57 274.70 0.00 0.00 15356.31 293.34 115389.15 00:14:44.765 [2024-11-29T07:46:34.710Z] =================================================================================================================== 00:14:44.765 
[2024-11-29T07:46:34.710Z] Total : 91.57 274.70 0.00 0.00 15356.31 293.34 115389.15 00:14:44.765 [2024-11-29 07:46:34.669027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.765 [2024-11-29 07:46:34.669190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.765 [2024-11-29 07:46:34.669321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.765 [2024-11-29 07:46:34.669371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:44.765 { 00:14:44.765 "results": [ 00:14:44.765 { 00:14:44.765 "job": "raid_bdev1", 00:14:44.765 "core_mask": "0x1", 00:14:44.765 "workload": "randrw", 00:14:44.765 "percentage": 50, 00:14:44.765 "status": "finished", 00:14:44.765 "queue_depth": 2, 00:14:44.765 "io_size": 3145728, 00:14:44.765 "runtime": 9.12998, 00:14:44.765 "iops": 91.56646564395541, 00:14:44.765 "mibps": 274.6993969318662, 00:14:44.765 "io_failed": 0, 00:14:44.765 "io_timeout": 0, 00:14:44.765 "avg_latency_us": 15356.308242619254, 00:14:44.765 "min_latency_us": 293.3379912663755, 00:14:44.765 "max_latency_us": 115389.14934497817 00:14:44.765 } 00:14:44.765 ], 00:14:44.765 "core_count": 1 00:14:44.765 } 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.765 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.064 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720
-- # [[ 0 == 0 ]] 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:45.065 /dev/nbd0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.065 
07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.065 1+0 records in 00:14:45.065 1+0 records out 00:14:45.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515082 s, 8.0 MB/s 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.065 07:46:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:45.333 /dev/nbd1 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:45.333 07:46:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.333 1+0 records in 00:14:45.333 1+0 records out 00:14:45.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299334 s, 13.7 MB/s 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.333 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.593 07:46:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.593 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.853 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:45.853 /dev/nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.113 1+0 records in 00:14:46.113 1+0 records out 00:14:46.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350547 s, 11.7 MB/s 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.113 07:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.374 07:46:36 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78452 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78452 ']' 00:14:46.374 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78452 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78452 00:14:46.635 killing process with pid 78452 00:14:46.635 Received shutdown signal, test time was about 10.832353 seconds 00:14:46.635 00:14:46.635 Latency(us) 00:14:46.635 [2024-11-29T07:46:36.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.635 [2024-11-29T07:46:36.580Z] =================================================================================================================== 00:14:46.635 [2024-11-29T07:46:36.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78452' 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78452 00:14:46.635 [2024-11-29 07:46:36.347573] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.635 07:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 
78452 00:14:46.895 [2024-11-29 07:46:36.746373] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:48.279 00:14:48.279 real 0m14.134s 00:14:48.279 user 0m17.630s 00:14:48.279 sys 0m1.795s 00:14:48.279 ************************************ 00:14:48.279 END TEST raid_rebuild_test_io 00:14:48.279 ************************************ 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.279 07:46:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:48.279 07:46:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:48.279 07:46:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.279 07:46:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.279 ************************************ 00:14:48.279 START TEST raid_rebuild_test_sb_io 00:14:48.279 ************************************ 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:48.279 07:46:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78880 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78880 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78880 ']' 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.279 07:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.279 [2024-11-29 07:46:38.064535] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:48.279 [2024-11-29 07:46:38.064813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78880 ] 00:14:48.280 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:48.280 Zero copy mechanism will not be used. 00:14:48.539 [2024-11-29 07:46:38.260378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.539 [2024-11-29 07:46:38.360188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.799 [2024-11-29 07:46:38.533120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.799 [2024-11-29 07:46:38.533221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.059 BaseBdev1_malloc 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 07:46:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.059 [2024-11-29 07:46:38.936637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.059 [2024-11-29 07:46:38.936765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.059 [2024-11-29 07:46:38.936792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:49.059 [2024-11-29 07:46:38.936804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.059 [2024-11-29 07:46:38.938836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.059 [2024-11-29 07:46:38.938878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.059 BaseBdev1 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.059 BaseBdev2_malloc 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.059 [2024-11-29 07:46:38.986472] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:14:49.059 [2024-11-29 07:46:38.986527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.059 [2024-11-29 07:46:38.986546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:49.059 [2024-11-29 07:46:38.986557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.059 [2024-11-29 07:46:38.988597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.059 [2024-11-29 07:46:38.988638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.059 BaseBdev2 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 07:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 BaseBdev3_malloc 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 [2024-11-29 07:46:39.049931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:49.319 [2024-11-29 07:46:39.049980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.319 
[2024-11-29 07:46:39.050015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:49.319 [2024-11-29 07:46:39.050026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.319 [2024-11-29 07:46:39.052023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.319 [2024-11-29 07:46:39.052064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:49.319 BaseBdev3 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 BaseBdev4_malloc 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 [2024-11-29 07:46:39.101492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:49.319 [2024-11-29 07:46:39.101561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.319 [2024-11-29 07:46:39.101580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:49.319 [2024-11-29 07:46:39.101590] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.319 [2024-11-29 07:46:39.103543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.319 [2024-11-29 07:46:39.103584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:49.319 BaseBdev4 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 spare_malloc 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 spare_delay 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 [2024-11-29 07:46:39.165416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.319 [2024-11-29 07:46:39.165462] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:49.319 [2024-11-29 07:46:39.165493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:49.319 [2024-11-29 07:46:39.165503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.319 [2024-11-29 07:46:39.167459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.319 [2024-11-29 07:46:39.167498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.319 spare 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 [2024-11-29 07:46:39.177440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.319 [2024-11-29 07:46:39.179170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.319 [2024-11-29 07:46:39.179232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.319 [2024-11-29 07:46:39.179280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.319 [2024-11-29 07:46:39.179449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.319 [2024-11-29 07:46:39.179464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.319 [2024-11-29 07:46:39.179691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:49.319 [2024-11-29 07:46:39.179869] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.319 [2024-11-29 07:46:39.179881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.319 [2024-11-29 07:46:39.180035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.319 07:46:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.319 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.319 "name": "raid_bdev1", 00:14:49.319 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:49.319 "strip_size_kb": 0, 00:14:49.319 "state": "online", 00:14:49.319 "raid_level": "raid1", 00:14:49.319 "superblock": true, 00:14:49.319 "num_base_bdevs": 4, 00:14:49.319 "num_base_bdevs_discovered": 4, 00:14:49.319 "num_base_bdevs_operational": 4, 00:14:49.319 "base_bdevs_list": [ 00:14:49.319 { 00:14:49.319 "name": "BaseBdev1", 00:14:49.319 "uuid": "20e074c4-5f31-554e-94b2-369679b51c95", 00:14:49.319 "is_configured": true, 00:14:49.319 "data_offset": 2048, 00:14:49.319 "data_size": 63488 00:14:49.319 }, 00:14:49.319 { 00:14:49.319 "name": "BaseBdev2", 00:14:49.319 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:49.319 "is_configured": true, 00:14:49.320 "data_offset": 2048, 00:14:49.320 "data_size": 63488 00:14:49.320 }, 00:14:49.320 { 00:14:49.320 "name": "BaseBdev3", 00:14:49.320 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:49.320 "is_configured": true, 00:14:49.320 "data_offset": 2048, 00:14:49.320 "data_size": 63488 00:14:49.320 }, 00:14:49.320 { 00:14:49.320 "name": "BaseBdev4", 00:14:49.320 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:49.320 "is_configured": true, 00:14:49.320 "data_offset": 2048, 00:14:49.320 "data_size": 63488 00:14:49.320 } 00:14:49.320 ] 00:14:49.320 }' 00:14:49.320 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.320 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.889 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:49.889 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.889 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.889 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.889 [2024-11-29 07:46:39.589000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.890 [2024-11-29 07:46:39.668536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.890 "name": "raid_bdev1", 00:14:49.890 "uuid": 
"f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:49.890 "strip_size_kb": 0, 00:14:49.890 "state": "online", 00:14:49.890 "raid_level": "raid1", 00:14:49.890 "superblock": true, 00:14:49.890 "num_base_bdevs": 4, 00:14:49.890 "num_base_bdevs_discovered": 3, 00:14:49.890 "num_base_bdevs_operational": 3, 00:14:49.890 "base_bdevs_list": [ 00:14:49.890 { 00:14:49.890 "name": null, 00:14:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.890 "is_configured": false, 00:14:49.890 "data_offset": 0, 00:14:49.890 "data_size": 63488 00:14:49.890 }, 00:14:49.890 { 00:14:49.890 "name": "BaseBdev2", 00:14:49.890 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:49.890 "is_configured": true, 00:14:49.890 "data_offset": 2048, 00:14:49.890 "data_size": 63488 00:14:49.890 }, 00:14:49.890 { 00:14:49.890 "name": "BaseBdev3", 00:14:49.890 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:49.890 "is_configured": true, 00:14:49.890 "data_offset": 2048, 00:14:49.890 "data_size": 63488 00:14:49.890 }, 00:14:49.890 { 00:14:49.890 "name": "BaseBdev4", 00:14:49.890 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:49.890 "is_configured": true, 00:14:49.890 "data_offset": 2048, 00:14:49.890 "data_size": 63488 00:14:49.890 } 00:14:49.890 ] 00:14:49.890 }' 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.890 07:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.890 [2024-11-29 07:46:39.755217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:49.890 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:49.890 Zero copy mechanism will not be used. 00:14:49.890 Running I/O for 60 seconds... 
00:14:50.460 07:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.460 07:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.460 07:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.460 [2024-11-29 07:46:40.110934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.460 07:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.460 07:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:50.460 [2024-11-29 07:46:40.163757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:50.460 [2024-11-29 07:46:40.165740] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.460 [2024-11-29 07:46:40.287525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.460 [2024-11-29 07:46:40.288948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.721 [2024-11-29 07:46:40.506062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.721 [2024-11-29 07:46:40.506955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.241 158.00 IOPS, 474.00 MiB/s [2024-11-29T07:46:41.186Z] [2024-11-29 07:46:40.968825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.241 [2024-11-29 07:46:40.969184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.241 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.501 "name": "raid_bdev1", 00:14:51.501 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:51.501 "strip_size_kb": 0, 00:14:51.501 "state": "online", 00:14:51.501 "raid_level": "raid1", 00:14:51.501 "superblock": true, 00:14:51.501 "num_base_bdevs": 4, 00:14:51.501 "num_base_bdevs_discovered": 4, 00:14:51.501 "num_base_bdevs_operational": 4, 00:14:51.501 "process": { 00:14:51.501 "type": "rebuild", 00:14:51.501 "target": "spare", 00:14:51.501 "progress": { 00:14:51.501 "blocks": 10240, 00:14:51.501 "percent": 16 00:14:51.501 } 00:14:51.501 }, 00:14:51.501 "base_bdevs_list": [ 00:14:51.501 { 00:14:51.501 "name": "spare", 00:14:51.501 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:51.501 "is_configured": true, 00:14:51.501 "data_offset": 2048, 00:14:51.501 "data_size": 63488 
00:14:51.501 }, 00:14:51.501 { 00:14:51.501 "name": "BaseBdev2", 00:14:51.501 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:51.501 "is_configured": true, 00:14:51.501 "data_offset": 2048, 00:14:51.501 "data_size": 63488 00:14:51.501 }, 00:14:51.501 { 00:14:51.501 "name": "BaseBdev3", 00:14:51.501 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:51.501 "is_configured": true, 00:14:51.501 "data_offset": 2048, 00:14:51.501 "data_size": 63488 00:14:51.501 }, 00:14:51.501 { 00:14:51.501 "name": "BaseBdev4", 00:14:51.501 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:51.501 "is_configured": true, 00:14:51.501 "data_offset": 2048, 00:14:51.501 "data_size": 63488 00:14:51.501 } 00:14:51.501 ] 00:14:51.501 }' 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.501 [2024-11-29 07:46:41.293179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.501 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.501 [2024-11-29 07:46:41.313876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.501 [2024-11-29 07:46:41.409021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:51.501 [2024-11-29 
07:46:41.410347] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.501 [2024-11-29 07:46:41.419187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.501 [2024-11-29 07:46:41.419224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.501 [2024-11-29 07:46:41.419236] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.760 [2024-11-29 07:46:41.446307] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.760 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.761 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.761 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.761 "name": "raid_bdev1", 00:14:51.761 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:51.761 "strip_size_kb": 0, 00:14:51.761 "state": "online", 00:14:51.761 "raid_level": "raid1", 00:14:51.761 "superblock": true, 00:14:51.761 "num_base_bdevs": 4, 00:14:51.761 "num_base_bdevs_discovered": 3, 00:14:51.761 "num_base_bdevs_operational": 3, 00:14:51.761 "base_bdevs_list": [ 00:14:51.761 { 00:14:51.761 "name": null, 00:14:51.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.761 "is_configured": false, 00:14:51.761 "data_offset": 0, 00:14:51.761 "data_size": 63488 00:14:51.761 }, 00:14:51.761 { 00:14:51.761 "name": "BaseBdev2", 00:14:51.761 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:51.761 "is_configured": true, 00:14:51.761 "data_offset": 2048, 00:14:51.761 "data_size": 63488 00:14:51.761 }, 00:14:51.761 { 00:14:51.761 "name": "BaseBdev3", 00:14:51.761 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:51.761 "is_configured": true, 00:14:51.761 "data_offset": 2048, 00:14:51.761 "data_size": 63488 00:14:51.761 }, 00:14:51.761 { 00:14:51.761 "name": "BaseBdev4", 00:14:51.761 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:51.761 "is_configured": true, 00:14:51.761 "data_offset": 2048, 00:14:51.761 "data_size": 63488 00:14:51.761 } 00:14:51.761 ] 00:14:51.761 }' 00:14:51.761 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.761 07:46:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.020 154.00 IOPS, 462.00 MiB/s [2024-11-29T07:46:41.965Z] 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.020 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.020 "name": "raid_bdev1", 00:14:52.020 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:52.020 "strip_size_kb": 0, 00:14:52.020 "state": "online", 00:14:52.020 "raid_level": "raid1", 00:14:52.020 "superblock": true, 00:14:52.020 "num_base_bdevs": 4, 00:14:52.020 "num_base_bdevs_discovered": 3, 00:14:52.020 "num_base_bdevs_operational": 3, 00:14:52.020 "base_bdevs_list": [ 00:14:52.020 { 00:14:52.020 "name": null, 00:14:52.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.020 "is_configured": false, 00:14:52.020 "data_offset": 0, 00:14:52.020 "data_size": 63488 00:14:52.020 }, 00:14:52.020 { 
00:14:52.020 "name": "BaseBdev2", 00:14:52.021 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:52.021 "is_configured": true, 00:14:52.021 "data_offset": 2048, 00:14:52.021 "data_size": 63488 00:14:52.021 }, 00:14:52.021 { 00:14:52.021 "name": "BaseBdev3", 00:14:52.021 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:52.021 "is_configured": true, 00:14:52.021 "data_offset": 2048, 00:14:52.021 "data_size": 63488 00:14:52.021 }, 00:14:52.021 { 00:14:52.021 "name": "BaseBdev4", 00:14:52.021 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:52.021 "is_configured": true, 00:14:52.021 "data_offset": 2048, 00:14:52.021 "data_size": 63488 00:14:52.021 } 00:14:52.021 ] 00:14:52.021 }' 00:14:52.021 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.280 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.280 07:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.280 [2024-11-29 07:46:42.014253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.280 07:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:52.280 [2024-11-29 07:46:42.078137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:52.280 [2024-11-29 07:46:42.080014] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.280 [2024-11-29 07:46:42.188615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.281 [2024-11-29 07:46:42.189201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.546 [2024-11-29 07:46:42.300260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.546 [2024-11-29 07:46:42.300576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.805 [2024-11-29 07:46:42.545403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:52.805 [2024-11-29 07:46:42.655183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.326 161.00 IOPS, 483.00 MiB/s [2024-11-29T07:46:43.271Z] [2024-11-29 07:46:43.039454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:53.326 [2024-11-29 07:46:43.040246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.326 "name": "raid_bdev1", 00:14:53.326 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:53.326 "strip_size_kb": 0, 00:14:53.326 "state": "online", 00:14:53.326 "raid_level": "raid1", 00:14:53.326 "superblock": true, 00:14:53.326 "num_base_bdevs": 4, 00:14:53.326 "num_base_bdevs_discovered": 4, 00:14:53.326 "num_base_bdevs_operational": 4, 00:14:53.326 "process": { 00:14:53.326 "type": "rebuild", 00:14:53.326 "target": "spare", 00:14:53.326 "progress": { 00:14:53.326 "blocks": 16384, 00:14:53.326 "percent": 25 00:14:53.326 } 00:14:53.326 }, 00:14:53.326 "base_bdevs_list": [ 00:14:53.326 { 00:14:53.326 "name": "spare", 00:14:53.326 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:53.326 "is_configured": true, 00:14:53.326 "data_offset": 2048, 00:14:53.326 "data_size": 63488 00:14:53.326 }, 00:14:53.326 { 00:14:53.326 "name": "BaseBdev2", 00:14:53.326 "uuid": "ca089308-fc9f-5a86-8021-a632ab4960ae", 00:14:53.326 "is_configured": true, 00:14:53.326 "data_offset": 2048, 00:14:53.326 "data_size": 63488 00:14:53.326 }, 00:14:53.326 { 00:14:53.326 "name": "BaseBdev3", 00:14:53.326 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:53.326 "is_configured": true, 00:14:53.326 "data_offset": 2048, 00:14:53.326 "data_size": 63488 00:14:53.326 }, 00:14:53.326 { 00:14:53.326 "name": 
"BaseBdev4", 00:14:53.326 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:53.326 "is_configured": true, 00:14:53.326 "data_offset": 2048, 00:14:53.326 "data_size": 63488 00:14:53.326 } 00:14:53.326 ] 00:14:53.326 }' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:53.326 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.326 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.326 [2024-11-29 07:46:43.172131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.586 [2024-11-29 07:46:43.481709] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:53.586 [2024-11-29 07:46:43.481799] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.586 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.846 "name": "raid_bdev1", 00:14:53.846 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:53.846 "strip_size_kb": 0, 00:14:53.846 "state": "online", 00:14:53.846 "raid_level": "raid1", 00:14:53.846 "superblock": true, 00:14:53.846 "num_base_bdevs": 4, 00:14:53.846 "num_base_bdevs_discovered": 3, 00:14:53.846 
"num_base_bdevs_operational": 3, 00:14:53.846 "process": { 00:14:53.846 "type": "rebuild", 00:14:53.846 "target": "spare", 00:14:53.846 "progress": { 00:14:53.846 "blocks": 18432, 00:14:53.846 "percent": 29 00:14:53.846 } 00:14:53.846 }, 00:14:53.846 "base_bdevs_list": [ 00:14:53.846 { 00:14:53.846 "name": "spare", 00:14:53.846 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:53.846 "is_configured": true, 00:14:53.846 "data_offset": 2048, 00:14:53.846 "data_size": 63488 00:14:53.846 }, 00:14:53.846 { 00:14:53.846 "name": null, 00:14:53.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.846 "is_configured": false, 00:14:53.846 "data_offset": 0, 00:14:53.846 "data_size": 63488 00:14:53.846 }, 00:14:53.846 { 00:14:53.846 "name": "BaseBdev3", 00:14:53.846 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:53.846 "is_configured": true, 00:14:53.846 "data_offset": 2048, 00:14:53.846 "data_size": 63488 00:14:53.846 }, 00:14:53.846 { 00:14:53.846 "name": "BaseBdev4", 00:14:53.846 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:53.846 "is_configured": true, 00:14:53.846 "data_offset": 2048, 00:14:53.846 "data_size": 63488 00:14:53.846 } 00:14:53.846 ] 00:14:53.846 }' 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=482 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.846 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.846 "name": "raid_bdev1", 00:14:53.846 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:53.846 "strip_size_kb": 0, 00:14:53.846 "state": "online", 00:14:53.847 "raid_level": "raid1", 00:14:53.847 "superblock": true, 00:14:53.847 "num_base_bdevs": 4, 00:14:53.847 "num_base_bdevs_discovered": 3, 00:14:53.847 "num_base_bdevs_operational": 3, 00:14:53.847 "process": { 00:14:53.847 "type": "rebuild", 00:14:53.847 "target": "spare", 00:14:53.847 "progress": { 00:14:53.847 "blocks": 20480, 00:14:53.847 "percent": 32 00:14:53.847 } 00:14:53.847 }, 00:14:53.847 "base_bdevs_list": [ 00:14:53.847 { 00:14:53.847 "name": "spare", 00:14:53.847 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:53.847 "is_configured": true, 00:14:53.847 "data_offset": 2048, 00:14:53.847 "data_size": 63488 00:14:53.847 }, 00:14:53.847 { 00:14:53.847 "name": null, 
00:14:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.847 "is_configured": false, 00:14:53.847 "data_offset": 0, 00:14:53.847 "data_size": 63488 00:14:53.847 }, 00:14:53.847 { 00:14:53.847 "name": "BaseBdev3", 00:14:53.847 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:53.847 "is_configured": true, 00:14:53.847 "data_offset": 2048, 00:14:53.847 "data_size": 63488 00:14:53.847 }, 00:14:53.847 { 00:14:53.847 "name": "BaseBdev4", 00:14:53.847 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:53.847 "is_configured": true, 00:14:53.847 "data_offset": 2048, 00:14:53.847 "data_size": 63488 00:14:53.847 } 00:14:53.847 ] 00:14:53.847 }' 00:14:53.847 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.847 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.847 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.847 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.847 07:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.106 136.75 IOPS, 410.25 MiB/s [2024-11-29T07:46:44.051Z] [2024-11-29 07:46:44.043999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:54.106 [2024-11-29 07:46:44.044360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:55.046 121.40 IOPS, 364.20 MiB/s [2024-11-29T07:46:44.991Z] 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.046 "name": "raid_bdev1", 00:14:55.046 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:55.046 "strip_size_kb": 0, 00:14:55.046 "state": "online", 00:14:55.046 "raid_level": "raid1", 00:14:55.046 "superblock": true, 00:14:55.046 "num_base_bdevs": 4, 00:14:55.046 "num_base_bdevs_discovered": 3, 00:14:55.046 "num_base_bdevs_operational": 3, 00:14:55.046 "process": { 00:14:55.046 "type": "rebuild", 00:14:55.046 "target": "spare", 00:14:55.046 "progress": { 00:14:55.046 "blocks": 40960, 00:14:55.046 "percent": 64 00:14:55.046 } 00:14:55.046 }, 00:14:55.046 "base_bdevs_list": [ 00:14:55.046 { 00:14:55.046 "name": "spare", 00:14:55.046 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:55.046 "is_configured": true, 00:14:55.046 "data_offset": 2048, 00:14:55.046 "data_size": 63488 00:14:55.046 }, 00:14:55.046 { 00:14:55.046 "name": null, 00:14:55.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.046 "is_configured": false, 00:14:55.046 
"data_offset": 0, 00:14:55.046 "data_size": 63488 00:14:55.046 }, 00:14:55.046 { 00:14:55.046 "name": "BaseBdev3", 00:14:55.046 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:55.046 "is_configured": true, 00:14:55.046 "data_offset": 2048, 00:14:55.046 "data_size": 63488 00:14:55.046 }, 00:14:55.046 { 00:14:55.046 "name": "BaseBdev4", 00:14:55.046 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:55.046 "is_configured": true, 00:14:55.046 "data_offset": 2048, 00:14:55.046 "data_size": 63488 00:14:55.046 } 00:14:55.046 ] 00:14:55.046 }' 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.046 07:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.306 [2024-11-29 07:46:45.009870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:55.306 [2024-11-29 07:46:45.214777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:55.306 [2024-11-29 07:46:45.215345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:55.875 [2024-11-29 07:46:45.655279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:56.135 107.67 IOPS, 323.00 MiB/s [2024-11-29T07:46:46.080Z] [2024-11-29 07:46:45.874473] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:56.135 07:46:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.135 [2024-11-29 07:46:45.979497] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.135 "name": "raid_bdev1", 00:14:56.135 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:56.135 "strip_size_kb": 0, 00:14:56.135 "state": "online", 00:14:56.135 "raid_level": "raid1", 00:14:56.135 "superblock": true, 00:14:56.135 "num_base_bdevs": 4, 00:14:56.135 "num_base_bdevs_discovered": 3, 00:14:56.135 "num_base_bdevs_operational": 3, 00:14:56.135 "process": { 00:14:56.135 "type": "rebuild", 00:14:56.135 "target": "spare", 00:14:56.135 "progress": { 00:14:56.135 "blocks": 63488, 00:14:56.135 
"percent": 100 00:14:56.135 } 00:14:56.135 }, 00:14:56.135 "base_bdevs_list": [ 00:14:56.135 { 00:14:56.135 "name": "spare", 00:14:56.135 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:56.135 "is_configured": true, 00:14:56.135 "data_offset": 2048, 00:14:56.135 "data_size": 63488 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": null, 00:14:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.135 "is_configured": false, 00:14:56.135 "data_offset": 0, 00:14:56.135 "data_size": 63488 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": "BaseBdev3", 00:14:56.135 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:56.135 "is_configured": true, 00:14:56.135 "data_offset": 2048, 00:14:56.135 "data_size": 63488 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": "BaseBdev4", 00:14:56.135 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:56.135 "is_configured": true, 00:14:56.135 "data_offset": 2048, 00:14:56.135 "data_size": 63488 00:14:56.135 } 00:14:56.135 ] 00:14:56.135 }' 00:14:56.135 [2024-11-29 07:46:45.983386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.135 07:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.135 07:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.135 07:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.396 07:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.396 07:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.225 96.86 IOPS, 290.57 MiB/s [2024-11-29T07:46:47.170Z] 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.225 
07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.225 "name": "raid_bdev1", 00:14:57.225 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:57.225 "strip_size_kb": 0, 00:14:57.225 "state": "online", 00:14:57.225 "raid_level": "raid1", 00:14:57.225 "superblock": true, 00:14:57.225 "num_base_bdevs": 4, 00:14:57.225 "num_base_bdevs_discovered": 3, 00:14:57.225 "num_base_bdevs_operational": 3, 00:14:57.225 "base_bdevs_list": [ 00:14:57.225 { 00:14:57.225 "name": "spare", 00:14:57.225 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:57.225 "is_configured": true, 00:14:57.225 "data_offset": 2048, 00:14:57.225 "data_size": 63488 00:14:57.225 }, 00:14:57.225 { 00:14:57.225 "name": null, 00:14:57.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.225 "is_configured": false, 00:14:57.225 "data_offset": 0, 00:14:57.225 "data_size": 63488 00:14:57.225 }, 00:14:57.225 { 00:14:57.225 "name": "BaseBdev3", 00:14:57.225 "uuid": 
"286e870f-2496-57af-b83b-48f932c761aa", 00:14:57.225 "is_configured": true, 00:14:57.225 "data_offset": 2048, 00:14:57.225 "data_size": 63488 00:14:57.225 }, 00:14:57.225 { 00:14:57.225 "name": "BaseBdev4", 00:14:57.225 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:57.225 "is_configured": true, 00:14:57.225 "data_offset": 2048, 00:14:57.225 "data_size": 63488 00:14:57.225 } 00:14:57.225 ] 00:14:57.225 }' 00:14:57.225 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.485 "name": "raid_bdev1", 00:14:57.485 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:57.485 "strip_size_kb": 0, 00:14:57.485 "state": "online", 00:14:57.485 "raid_level": "raid1", 00:14:57.485 "superblock": true, 00:14:57.485 "num_base_bdevs": 4, 00:14:57.485 "num_base_bdevs_discovered": 3, 00:14:57.485 "num_base_bdevs_operational": 3, 00:14:57.485 "base_bdevs_list": [ 00:14:57.485 { 00:14:57.485 "name": "spare", 00:14:57.485 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:57.485 "is_configured": true, 00:14:57.485 "data_offset": 2048, 00:14:57.485 "data_size": 63488 00:14:57.485 }, 00:14:57.485 { 00:14:57.485 "name": null, 00:14:57.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.485 "is_configured": false, 00:14:57.485 "data_offset": 0, 00:14:57.485 "data_size": 63488 00:14:57.485 }, 00:14:57.485 { 00:14:57.485 "name": "BaseBdev3", 00:14:57.485 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:57.485 "is_configured": true, 00:14:57.485 "data_offset": 2048, 00:14:57.485 "data_size": 63488 00:14:57.485 }, 00:14:57.485 { 00:14:57.485 "name": "BaseBdev4", 00:14:57.485 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:57.485 "is_configured": true, 00:14:57.485 "data_offset": 2048, 00:14:57.485 "data_size": 63488 00:14:57.485 } 00:14:57.485 ] 00:14:57.485 }' 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.485 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.486 
07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.486 "name": "raid_bdev1", 00:14:57.486 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:57.486 "strip_size_kb": 0, 00:14:57.486 "state": "online", 00:14:57.486 "raid_level": "raid1", 00:14:57.486 
"superblock": true, 00:14:57.486 "num_base_bdevs": 4, 00:14:57.486 "num_base_bdevs_discovered": 3, 00:14:57.486 "num_base_bdevs_operational": 3, 00:14:57.486 "base_bdevs_list": [ 00:14:57.486 { 00:14:57.486 "name": "spare", 00:14:57.486 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:57.486 "is_configured": true, 00:14:57.486 "data_offset": 2048, 00:14:57.486 "data_size": 63488 00:14:57.486 }, 00:14:57.486 { 00:14:57.486 "name": null, 00:14:57.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.486 "is_configured": false, 00:14:57.486 "data_offset": 0, 00:14:57.486 "data_size": 63488 00:14:57.486 }, 00:14:57.486 { 00:14:57.486 "name": "BaseBdev3", 00:14:57.486 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:57.486 "is_configured": true, 00:14:57.486 "data_offset": 2048, 00:14:57.486 "data_size": 63488 00:14:57.486 }, 00:14:57.486 { 00:14:57.486 "name": "BaseBdev4", 00:14:57.486 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:57.486 "is_configured": true, 00:14:57.486 "data_offset": 2048, 00:14:57.486 "data_size": 63488 00:14:57.486 } 00:14:57.486 ] 00:14:57.486 }' 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.486 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.056 89.38 IOPS, 268.12 MiB/s [2024-11-29T07:46:48.002Z] 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.057 [2024-11-29 07:46:47.819883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.057 [2024-11-29 07:46:47.819967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.057 00:14:58.057 Latency(us) 00:14:58.057 
[2024-11-29T07:46:48.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.057 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:58.057 raid_bdev1 : 8.18 88.16 264.49 0.00 0.00 16047.45 316.59 114931.26 00:14:58.057 [2024-11-29T07:46:48.002Z] =================================================================================================================== 00:14:58.057 [2024-11-29T07:46:48.002Z] Total : 88.16 264.49 0.00 0.00 16047.45 316.59 114931.26 00:14:58.057 [2024-11-29 07:46:47.940360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.057 [2024-11-29 07:46:47.940471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.057 [2024-11-29 07:46:47.940586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.057 [2024-11-29 07:46:47.940652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.057 { 00:14:58.057 "results": [ 00:14:58.057 { 00:14:58.057 "job": "raid_bdev1", 00:14:58.057 "core_mask": "0x1", 00:14:58.057 "workload": "randrw", 00:14:58.057 "percentage": 50, 00:14:58.057 "status": "finished", 00:14:58.057 "queue_depth": 2, 00:14:58.057 "io_size": 3145728, 00:14:58.057 "runtime": 8.178065, 00:14:58.057 "iops": 88.16266439555078, 00:14:58.057 "mibps": 264.4879931866523, 00:14:58.057 "io_failed": 0, 00:14:58.057 "io_timeout": 0, 00:14:58.057 "avg_latency_us": 16047.447967100523, 00:14:58.057 "min_latency_us": 316.5903930131004, 00:14:58.057 "max_latency_us": 114931.2558951965 00:14:58.057 } 00:14:58.057 ], 00:14:58.057 "core_count": 1 00:14:58.057 } 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.057 
07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.057 07:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:58.316 /dev/nbd0 00:14:58.316 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.316 
07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.316 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:58.316 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:58.316 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.316 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.317 1+0 records in 00:14:58.317 1+0 records out 00:14:58.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367814 s, 11.1 MB/s 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.317 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:58.577 /dev/nbd1 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.577 1+0 records in 00:14:58.577 1+0 records out 00:14:58.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037905 s, 10.8 MB/s 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.577 07:46:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.577 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.843 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.133 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.133 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.133 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.133 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:59.134 
07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.134 07:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:59.134 /dev/nbd1 00:14:59.134 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.417 1+0 records in 00:14:59.417 1+0 records out 00:14:59.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576154 s, 7.1 MB/s 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.417 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.418 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.678 07:46:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.678 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.938 [2024-11-29 07:46:49.635997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.938 
[2024-11-29 07:46:49.636114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.938 [2024-11-29 07:46:49.636153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:59.938 [2024-11-29 07:46:49.636191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.938 [2024-11-29 07:46:49.638332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.938 [2024-11-29 07:46:49.638406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.938 [2024-11-29 07:46:49.638538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:59.938 [2024-11-29 07:46:49.638626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.938 [2024-11-29 07:46:49.638801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.938 [2024-11-29 07:46:49.638937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.938 spare 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.938 [2024-11-29 07:46:49.738862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:59.938 [2024-11-29 07:46:49.738926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.938 [2024-11-29 07:46:49.739259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:59.938 [2024-11-29 07:46:49.739463] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:59.938 [2024-11-29 07:46:49.739504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:59.938 [2024-11-29 07:46:49.739717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.938 07:46:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.938 "name": "raid_bdev1", 00:14:59.938 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:14:59.938 "strip_size_kb": 0, 00:14:59.938 "state": "online", 00:14:59.938 "raid_level": "raid1", 00:14:59.938 "superblock": true, 00:14:59.938 "num_base_bdevs": 4, 00:14:59.938 "num_base_bdevs_discovered": 3, 00:14:59.938 "num_base_bdevs_operational": 3, 00:14:59.938 "base_bdevs_list": [ 00:14:59.938 { 00:14:59.938 "name": "spare", 00:14:59.938 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:14:59.938 "is_configured": true, 00:14:59.938 "data_offset": 2048, 00:14:59.938 "data_size": 63488 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "name": null, 00:14:59.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.938 "is_configured": false, 00:14:59.938 "data_offset": 2048, 00:14:59.938 "data_size": 63488 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "name": "BaseBdev3", 00:14:59.938 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:14:59.938 "is_configured": true, 00:14:59.938 "data_offset": 2048, 00:14:59.938 "data_size": 63488 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "name": "BaseBdev4", 00:14:59.938 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:14:59.938 "is_configured": true, 00:14:59.938 "data_offset": 2048, 00:14:59.938 "data_size": 63488 00:14:59.938 } 00:14:59.938 ] 00:14:59.938 }' 00:14:59.938 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.939 07:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.198 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.458 "name": "raid_bdev1", 00:15:00.458 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:00.458 "strip_size_kb": 0, 00:15:00.458 "state": "online", 00:15:00.458 "raid_level": "raid1", 00:15:00.458 "superblock": true, 00:15:00.458 "num_base_bdevs": 4, 00:15:00.458 "num_base_bdevs_discovered": 3, 00:15:00.458 "num_base_bdevs_operational": 3, 00:15:00.458 "base_bdevs_list": [ 00:15:00.458 { 00:15:00.458 "name": "spare", 00:15:00.458 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:15:00.458 "is_configured": true, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": null, 00:15:00.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.458 "is_configured": false, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": "BaseBdev3", 00:15:00.458 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 
00:15:00.458 "is_configured": true, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": "BaseBdev4", 00:15:00.458 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:00.458 "is_configured": true, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 } 00:15:00.458 ] 00:15:00.458 }' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.458 [2024-11-29 07:46:50.307212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.458 "name": "raid_bdev1", 00:15:00.458 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:00.458 "strip_size_kb": 0, 00:15:00.458 "state": 
"online", 00:15:00.458 "raid_level": "raid1", 00:15:00.458 "superblock": true, 00:15:00.458 "num_base_bdevs": 4, 00:15:00.458 "num_base_bdevs_discovered": 2, 00:15:00.458 "num_base_bdevs_operational": 2, 00:15:00.458 "base_bdevs_list": [ 00:15:00.458 { 00:15:00.458 "name": null, 00:15:00.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.458 "is_configured": false, 00:15:00.458 "data_offset": 0, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": null, 00:15:00.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.458 "is_configured": false, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": "BaseBdev3", 00:15:00.458 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:00.458 "is_configured": true, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 }, 00:15:00.458 { 00:15:00.458 "name": "BaseBdev4", 00:15:00.458 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:00.458 "is_configured": true, 00:15:00.458 "data_offset": 2048, 00:15:00.458 "data_size": 63488 00:15:00.458 } 00:15:00.458 ] 00:15:00.458 }' 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.458 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.028 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.028 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.028 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.028 [2024-11-29 07:46:50.782446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.028 [2024-11-29 07:46:50.782693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:15:01.028 [2024-11-29 07:46:50.782758] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:01.028 [2024-11-29 07:46:50.782858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.028 [2024-11-29 07:46:50.797751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:01.028 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.028 07:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:01.028 [2024-11-29 07:46:50.799590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.967 
"name": "raid_bdev1", 00:15:01.967 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:01.967 "strip_size_kb": 0, 00:15:01.967 "state": "online", 00:15:01.967 "raid_level": "raid1", 00:15:01.967 "superblock": true, 00:15:01.967 "num_base_bdevs": 4, 00:15:01.967 "num_base_bdevs_discovered": 3, 00:15:01.967 "num_base_bdevs_operational": 3, 00:15:01.967 "process": { 00:15:01.967 "type": "rebuild", 00:15:01.967 "target": "spare", 00:15:01.967 "progress": { 00:15:01.967 "blocks": 20480, 00:15:01.967 "percent": 32 00:15:01.967 } 00:15:01.967 }, 00:15:01.967 "base_bdevs_list": [ 00:15:01.967 { 00:15:01.967 "name": "spare", 00:15:01.967 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:15:01.967 "is_configured": true, 00:15:01.967 "data_offset": 2048, 00:15:01.967 "data_size": 63488 00:15:01.967 }, 00:15:01.967 { 00:15:01.967 "name": null, 00:15:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.967 "is_configured": false, 00:15:01.967 "data_offset": 2048, 00:15:01.967 "data_size": 63488 00:15:01.967 }, 00:15:01.967 { 00:15:01.967 "name": "BaseBdev3", 00:15:01.967 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:01.967 "is_configured": true, 00:15:01.967 "data_offset": 2048, 00:15:01.967 "data_size": 63488 00:15:01.967 }, 00:15:01.967 { 00:15:01.967 "name": "BaseBdev4", 00:15:01.967 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:01.967 "is_configured": true, 00:15:01.967 "data_offset": 2048, 00:15:01.967 "data_size": 63488 00:15:01.967 } 00:15:01.967 ] 00:15:01.967 }' 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.967 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.227 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.227 
07:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:02.227 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 07:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.227 [2024-11-29 07:46:51.960118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.227 [2024-11-29 07:46:52.004477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.227 [2024-11-29 07:46:52.004599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.227 [2024-11-29 07:46:52.004634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.227 [2024-11-29 07:46:52.004658] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.227 07:46:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.227 "name": "raid_bdev1", 00:15:02.227 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:02.227 "strip_size_kb": 0, 00:15:02.227 "state": "online", 00:15:02.227 "raid_level": "raid1", 00:15:02.227 "superblock": true, 00:15:02.227 "num_base_bdevs": 4, 00:15:02.227 "num_base_bdevs_discovered": 2, 00:15:02.227 "num_base_bdevs_operational": 2, 00:15:02.227 "base_bdevs_list": [ 00:15:02.227 { 00:15:02.227 "name": null, 00:15:02.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.227 "is_configured": false, 00:15:02.227 "data_offset": 0, 00:15:02.227 "data_size": 63488 00:15:02.227 }, 00:15:02.227 { 00:15:02.227 "name": null, 00:15:02.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.227 "is_configured": false, 00:15:02.227 "data_offset": 2048, 00:15:02.227 "data_size": 63488 00:15:02.227 }, 00:15:02.227 { 00:15:02.227 "name": "BaseBdev3", 00:15:02.227 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:02.227 "is_configured": true, 00:15:02.227 "data_offset": 2048, 00:15:02.227 "data_size": 63488 00:15:02.227 }, 00:15:02.227 { 00:15:02.227 "name": "BaseBdev4", 00:15:02.227 "uuid": 
"fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:02.227 "is_configured": true, 00:15:02.227 "data_offset": 2048, 00:15:02.227 "data_size": 63488 00:15:02.227 } 00:15:02.227 ] 00:15:02.227 }' 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.227 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.797 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.797 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.797 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.797 [2024-11-29 07:46:52.491780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.797 [2024-11-29 07:46:52.491910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.797 [2024-11-29 07:46:52.491946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:02.797 [2024-11-29 07:46:52.491958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.797 [2024-11-29 07:46:52.492448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.797 [2024-11-29 07:46:52.492479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.797 [2024-11-29 07:46:52.492573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:02.797 [2024-11-29 07:46:52.492593] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:02.797 [2024-11-29 07:46:52.492602] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:02.797 [2024-11-29 07:46:52.492633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.797 spare 00:15:02.797 [2024-11-29 07:46:52.507274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:02.797 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.797 07:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:02.797 [2024-11-29 07:46:52.509039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.736 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.736 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.736 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.736 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.736 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.737 "name": "raid_bdev1", 00:15:03.737 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:03.737 "strip_size_kb": 0, 00:15:03.737 
"state": "online", 00:15:03.737 "raid_level": "raid1", 00:15:03.737 "superblock": true, 00:15:03.737 "num_base_bdevs": 4, 00:15:03.737 "num_base_bdevs_discovered": 3, 00:15:03.737 "num_base_bdevs_operational": 3, 00:15:03.737 "process": { 00:15:03.737 "type": "rebuild", 00:15:03.737 "target": "spare", 00:15:03.737 "progress": { 00:15:03.737 "blocks": 20480, 00:15:03.737 "percent": 32 00:15:03.737 } 00:15:03.737 }, 00:15:03.737 "base_bdevs_list": [ 00:15:03.737 { 00:15:03.737 "name": "spare", 00:15:03.737 "uuid": "8434fb42-652a-550b-ac66-959255269446", 00:15:03.737 "is_configured": true, 00:15:03.737 "data_offset": 2048, 00:15:03.737 "data_size": 63488 00:15:03.737 }, 00:15:03.737 { 00:15:03.737 "name": null, 00:15:03.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.737 "is_configured": false, 00:15:03.737 "data_offset": 2048, 00:15:03.737 "data_size": 63488 00:15:03.737 }, 00:15:03.737 { 00:15:03.737 "name": "BaseBdev3", 00:15:03.737 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:03.737 "is_configured": true, 00:15:03.737 "data_offset": 2048, 00:15:03.737 "data_size": 63488 00:15:03.737 }, 00:15:03.737 { 00:15:03.737 "name": "BaseBdev4", 00:15:03.737 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:03.737 "is_configured": true, 00:15:03.737 "data_offset": 2048, 00:15:03.737 "data_size": 63488 00:15:03.737 } 00:15:03.737 ] 00:15:03.737 }' 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:03.737 07:46:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.737 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.737 [2024-11-29 07:46:53.672991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.996 [2024-11-29 07:46:53.713889] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.996 [2024-11-29 07:46:53.713971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.996 [2024-11-29 07:46:53.713990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.996 [2024-11-29 07:46:53.713997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.996 07:46:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.996 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.997 "name": "raid_bdev1", 00:15:03.997 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:03.997 "strip_size_kb": 0, 00:15:03.997 "state": "online", 00:15:03.997 "raid_level": "raid1", 00:15:03.997 "superblock": true, 00:15:03.997 "num_base_bdevs": 4, 00:15:03.997 "num_base_bdevs_discovered": 2, 00:15:03.997 "num_base_bdevs_operational": 2, 00:15:03.997 "base_bdevs_list": [ 00:15:03.997 { 00:15:03.997 "name": null, 00:15:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.997 "is_configured": false, 00:15:03.997 "data_offset": 0, 00:15:03.997 "data_size": 63488 00:15:03.997 }, 00:15:03.997 { 00:15:03.997 "name": null, 00:15:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.997 "is_configured": false, 00:15:03.997 "data_offset": 2048, 00:15:03.997 "data_size": 63488 00:15:03.997 }, 00:15:03.997 { 00:15:03.997 "name": "BaseBdev3", 00:15:03.997 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:03.997 "is_configured": true, 00:15:03.997 "data_offset": 2048, 00:15:03.997 "data_size": 63488 00:15:03.997 }, 00:15:03.997 { 00:15:03.997 "name": "BaseBdev4", 00:15:03.997 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:03.997 "is_configured": true, 00:15:03.997 "data_offset": 2048, 00:15:03.997 
"data_size": 63488 00:15:03.997 } 00:15:03.997 ] 00:15:03.997 }' 00:15:03.997 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.997 07:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.256 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.516 "name": "raid_bdev1", 00:15:04.516 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:04.516 "strip_size_kb": 0, 00:15:04.516 "state": "online", 00:15:04.516 "raid_level": "raid1", 00:15:04.516 "superblock": true, 00:15:04.516 "num_base_bdevs": 4, 00:15:04.516 "num_base_bdevs_discovered": 2, 00:15:04.516 "num_base_bdevs_operational": 2, 00:15:04.516 "base_bdevs_list": [ 00:15:04.516 { 00:15:04.516 "name": null, 00:15:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:04.516 "is_configured": false, 00:15:04.516 "data_offset": 0, 00:15:04.516 "data_size": 63488 00:15:04.516 }, 00:15:04.516 { 00:15:04.516 "name": null, 00:15:04.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.516 "is_configured": false, 00:15:04.516 "data_offset": 2048, 00:15:04.516 "data_size": 63488 00:15:04.516 }, 00:15:04.516 { 00:15:04.516 "name": "BaseBdev3", 00:15:04.516 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:04.516 "is_configured": true, 00:15:04.516 "data_offset": 2048, 00:15:04.516 "data_size": 63488 00:15:04.516 }, 00:15:04.516 { 00:15:04.516 "name": "BaseBdev4", 00:15:04.516 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:04.516 "is_configured": true, 00:15:04.516 "data_offset": 2048, 00:15:04.516 "data_size": 63488 00:15:04.516 } 00:15:04.516 ] 00:15:04.516 }' 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.516 07:46:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.516 [2024-11-29 07:46:54.317032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.516 [2024-11-29 07:46:54.317093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.516 [2024-11-29 07:46:54.317131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:04.516 [2024-11-29 07:46:54.317140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.516 [2024-11-29 07:46:54.317597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.516 [2024-11-29 07:46:54.317620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.516 [2024-11-29 07:46:54.317704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:04.516 [2024-11-29 07:46:54.317718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:04.516 [2024-11-29 07:46:54.317728] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:04.516 [2024-11-29 07:46:54.317738] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:04.516 BaseBdev1 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.516 07:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.456 "name": "raid_bdev1", 00:15:05.456 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:05.456 "strip_size_kb": 0, 00:15:05.456 "state": "online", 00:15:05.456 "raid_level": "raid1", 00:15:05.456 "superblock": true, 00:15:05.456 "num_base_bdevs": 4, 00:15:05.456 "num_base_bdevs_discovered": 2, 00:15:05.456 "num_base_bdevs_operational": 2, 00:15:05.456 "base_bdevs_list": [ 00:15:05.456 { 00:15:05.456 "name": null, 00:15:05.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.456 "is_configured": false, 00:15:05.456 
"data_offset": 0, 00:15:05.456 "data_size": 63488 00:15:05.456 }, 00:15:05.456 { 00:15:05.456 "name": null, 00:15:05.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.456 "is_configured": false, 00:15:05.456 "data_offset": 2048, 00:15:05.456 "data_size": 63488 00:15:05.456 }, 00:15:05.456 { 00:15:05.456 "name": "BaseBdev3", 00:15:05.456 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:05.456 "is_configured": true, 00:15:05.456 "data_offset": 2048, 00:15:05.456 "data_size": 63488 00:15:05.456 }, 00:15:05.456 { 00:15:05.456 "name": "BaseBdev4", 00:15:05.456 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:05.456 "is_configured": true, 00:15:05.456 "data_offset": 2048, 00:15:05.456 "data_size": 63488 00:15:05.456 } 00:15:05.456 ] 00:15:05.456 }' 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.456 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.023 "name": "raid_bdev1", 00:15:06.023 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:06.023 "strip_size_kb": 0, 00:15:06.023 "state": "online", 00:15:06.023 "raid_level": "raid1", 00:15:06.023 "superblock": true, 00:15:06.023 "num_base_bdevs": 4, 00:15:06.023 "num_base_bdevs_discovered": 2, 00:15:06.023 "num_base_bdevs_operational": 2, 00:15:06.023 "base_bdevs_list": [ 00:15:06.023 { 00:15:06.023 "name": null, 00:15:06.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.023 "is_configured": false, 00:15:06.023 "data_offset": 0, 00:15:06.023 "data_size": 63488 00:15:06.023 }, 00:15:06.023 { 00:15:06.023 "name": null, 00:15:06.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.023 "is_configured": false, 00:15:06.023 "data_offset": 2048, 00:15:06.023 "data_size": 63488 00:15:06.023 }, 00:15:06.023 { 00:15:06.023 "name": "BaseBdev3", 00:15:06.023 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:06.023 "is_configured": true, 00:15:06.023 "data_offset": 2048, 00:15:06.023 "data_size": 63488 00:15:06.023 }, 00:15:06.023 { 00:15:06.023 "name": "BaseBdev4", 00:15:06.023 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:06.023 "is_configured": true, 00:15:06.023 "data_offset": 2048, 00:15:06.023 "data_size": 63488 00:15:06.023 } 00:15:06.023 ] 00:15:06.023 }' 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.023 
07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 [2024-11-29 07:46:55.858680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.023 [2024-11-29 07:46:55.858890] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:06.023 [2024-11-29 07:46:55.858958] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:06.023 request: 00:15:06.023 { 00:15:06.023 "base_bdev": "BaseBdev1", 00:15:06.023 "raid_bdev": "raid_bdev1", 00:15:06.023 "method": "bdev_raid_add_base_bdev", 00:15:06.023 "req_id": 1 00:15:06.023 } 00:15:06.023 Got JSON-RPC error response 00:15:06.023 response: 00:15:06.023 { 00:15:06.023 "code": -22, 00:15:06.023 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:06.023 } 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:06.023 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:06.024 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.024 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.024 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.024 07:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.960 07:46:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.960 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.219 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.219 "name": "raid_bdev1", 00:15:07.219 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:07.219 "strip_size_kb": 0, 00:15:07.219 "state": "online", 00:15:07.219 "raid_level": "raid1", 00:15:07.219 "superblock": true, 00:15:07.219 "num_base_bdevs": 4, 00:15:07.219 "num_base_bdevs_discovered": 2, 00:15:07.219 "num_base_bdevs_operational": 2, 00:15:07.219 "base_bdevs_list": [ 00:15:07.219 { 00:15:07.219 "name": null, 00:15:07.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.219 "is_configured": false, 00:15:07.219 "data_offset": 0, 00:15:07.219 "data_size": 63488 00:15:07.219 }, 00:15:07.219 { 00:15:07.219 "name": null, 00:15:07.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.219 "is_configured": false, 00:15:07.219 "data_offset": 2048, 00:15:07.219 "data_size": 63488 00:15:07.219 }, 00:15:07.219 { 00:15:07.219 "name": "BaseBdev3", 00:15:07.219 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:07.219 "is_configured": true, 00:15:07.219 "data_offset": 2048, 00:15:07.219 "data_size": 63488 00:15:07.219 }, 00:15:07.219 { 00:15:07.219 "name": "BaseBdev4", 00:15:07.219 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:07.219 "is_configured": true, 00:15:07.219 "data_offset": 2048, 00:15:07.219 "data_size": 63488 00:15:07.219 } 00:15:07.219 ] 00:15:07.219 }' 00:15:07.219 07:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.219 07:46:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.477 "name": "raid_bdev1", 00:15:07.477 "uuid": "f25b2fdb-6832-4ead-afca-ef55f9b9a7e8", 00:15:07.477 "strip_size_kb": 0, 00:15:07.477 "state": "online", 00:15:07.477 "raid_level": "raid1", 00:15:07.477 "superblock": true, 00:15:07.477 "num_base_bdevs": 4, 00:15:07.477 "num_base_bdevs_discovered": 2, 00:15:07.477 "num_base_bdevs_operational": 2, 00:15:07.477 "base_bdevs_list": [ 00:15:07.477 { 00:15:07.477 "name": null, 00:15:07.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.477 "is_configured": false, 00:15:07.477 "data_offset": 0, 00:15:07.477 "data_size": 63488 00:15:07.477 }, 00:15:07.477 { 00:15:07.477 "name": null, 00:15:07.477 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:07.477 "is_configured": false, 00:15:07.477 "data_offset": 2048, 00:15:07.477 "data_size": 63488 00:15:07.477 }, 00:15:07.477 { 00:15:07.477 "name": "BaseBdev3", 00:15:07.477 "uuid": "286e870f-2496-57af-b83b-48f932c761aa", 00:15:07.477 "is_configured": true, 00:15:07.477 "data_offset": 2048, 00:15:07.477 "data_size": 63488 00:15:07.477 }, 00:15:07.477 { 00:15:07.477 "name": "BaseBdev4", 00:15:07.477 "uuid": "fb0a9716-80d1-5077-8447-08deda6763ac", 00:15:07.477 "is_configured": true, 00:15:07.477 "data_offset": 2048, 00:15:07.477 "data_size": 63488 00:15:07.477 } 00:15:07.477 ] 00:15:07.477 }' 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.477 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78880 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78880 ']' 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78880 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78880 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78880' killing process with pid 78880 Received shutdown signal, test time was about 17.785907 seconds 00:15:07.736 00:15:07.736 Latency(us) 00:15:07.736 [2024-11-29T07:46:57.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.736 [2024-11-29T07:46:57.681Z] =================================================================================================================== 00:15:07.736 [2024-11-29T07:46:57.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78880 00:15:07.736 [2024-11-29 07:46:57.508959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.736 [2024-11-29 07:46:57.509091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.736 [2024-11-29 07:46:57.509170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.736 07:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78880 00:15:07.736 [2024-11-29 07:46:57.509187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:07.994 [2024-11-29 07:46:57.904575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.374 07:46:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.374 ************************************ 00:15:09.374 END TEST raid_rebuild_test_sb_io 00:15:09.374 ************************************ 00:15:09.374 00:15:09.374 real 0m21.078s 00:15:09.374 user 0m27.493s 00:15:09.374 sys 0m2.476s 00:15:09.374 07:46:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.374 07:46:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.374 07:46:59 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:09.374 07:46:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:09.374 07:46:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:09.374 07:46:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.374 07:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 ************************************ 00:15:09.374 START TEST raid5f_state_function_test 00:15:09.374 ************************************ 00:15:09.374 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.375 07:46:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79596 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:09.375 07:46:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79596' 00:15:09.375 Process raid pid: 79596 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79596 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79596 ']' 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.375 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.375 [2024-11-29 07:46:59.188380] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:09.375 [2024-11-29 07:46:59.188582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.635 [2024-11-29 07:46:59.358902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.635 [2024-11-29 07:46:59.466326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.894 [2024-11-29 07:46:59.655334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.894 [2024-11-29 07:46:59.655452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.154 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.154 07:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:10.154 07:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.154 [2024-11-29 07:47:00.007926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.154 [2024-11-29 07:47:00.007979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.154 [2024-11-29 07:47:00.007990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.154 [2024-11-29 07:47:00.008000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.154 [2024-11-29 07:47:00.008006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:10.154 [2024-11-29 07:47:00.008014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:10.154 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.154 "name": "Existed_Raid", 00:15:10.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.154 "strip_size_kb": 64, 00:15:10.154 "state": "configuring", 00:15:10.155 "raid_level": "raid5f", 00:15:10.155 "superblock": false, 00:15:10.155 "num_base_bdevs": 3, 00:15:10.155 "num_base_bdevs_discovered": 0, 00:15:10.155 "num_base_bdevs_operational": 3, 00:15:10.155 "base_bdevs_list": [ 00:15:10.155 { 00:15:10.155 "name": "BaseBdev1", 00:15:10.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.155 "is_configured": false, 00:15:10.155 "data_offset": 0, 00:15:10.155 "data_size": 0 00:15:10.155 }, 00:15:10.155 { 00:15:10.155 "name": "BaseBdev2", 00:15:10.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.155 "is_configured": false, 00:15:10.155 "data_offset": 0, 00:15:10.155 "data_size": 0 00:15:10.155 }, 00:15:10.155 { 00:15:10.155 "name": "BaseBdev3", 00:15:10.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.155 "is_configured": false, 00:15:10.155 "data_offset": 0, 00:15:10.155 "data_size": 0 00:15:10.155 } 00:15:10.155 ] 00:15:10.155 }' 00:15:10.155 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.155 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 [2024-11-29 07:47:00.439116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.722 [2024-11-29 07:47:00.439192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 [2024-11-29 07:47:00.451084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.722 [2024-11-29 07:47:00.451193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.722 [2024-11-29 07:47:00.451225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.722 [2024-11-29 07:47:00.451249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.722 [2024-11-29 07:47:00.451267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.722 [2024-11-29 07:47:00.451286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 [2024-11-29 07:47:00.501475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.722 BaseBdev1 00:15:10.722 07:47:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 [ 00:15:10.722 { 00:15:10.722 "name": "BaseBdev1", 00:15:10.722 "aliases": [ 00:15:10.722 "fef24fd2-b69e-47ac-985f-a97c3446e67b" 00:15:10.722 ], 00:15:10.722 "product_name": "Malloc disk", 00:15:10.722 "block_size": 512, 00:15:10.722 "num_blocks": 65536, 00:15:10.722 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:10.722 "assigned_rate_limits": { 00:15:10.722 "rw_ios_per_sec": 0, 00:15:10.722 
"rw_mbytes_per_sec": 0, 00:15:10.722 "r_mbytes_per_sec": 0, 00:15:10.722 "w_mbytes_per_sec": 0 00:15:10.722 }, 00:15:10.722 "claimed": true, 00:15:10.722 "claim_type": "exclusive_write", 00:15:10.722 "zoned": false, 00:15:10.722 "supported_io_types": { 00:15:10.722 "read": true, 00:15:10.722 "write": true, 00:15:10.722 "unmap": true, 00:15:10.722 "flush": true, 00:15:10.722 "reset": true, 00:15:10.722 "nvme_admin": false, 00:15:10.722 "nvme_io": false, 00:15:10.722 "nvme_io_md": false, 00:15:10.722 "write_zeroes": true, 00:15:10.722 "zcopy": true, 00:15:10.722 "get_zone_info": false, 00:15:10.722 "zone_management": false, 00:15:10.722 "zone_append": false, 00:15:10.722 "compare": false, 00:15:10.722 "compare_and_write": false, 00:15:10.722 "abort": true, 00:15:10.722 "seek_hole": false, 00:15:10.722 "seek_data": false, 00:15:10.722 "copy": true, 00:15:10.722 "nvme_iov_md": false 00:15:10.722 }, 00:15:10.722 "memory_domains": [ 00:15:10.722 { 00:15:10.722 "dma_device_id": "system", 00:15:10.722 "dma_device_type": 1 00:15:10.722 }, 00:15:10.722 { 00:15:10.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.722 "dma_device_type": 2 00:15:10.722 } 00:15:10.722 ], 00:15:10.722 "driver_specific": {} 00:15:10.722 } 00:15:10.722 ] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.722 07:47:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.722 "name": "Existed_Raid", 00:15:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.722 "strip_size_kb": 64, 00:15:10.722 "state": "configuring", 00:15:10.722 "raid_level": "raid5f", 00:15:10.722 "superblock": false, 00:15:10.722 "num_base_bdevs": 3, 00:15:10.722 "num_base_bdevs_discovered": 1, 00:15:10.722 "num_base_bdevs_operational": 3, 00:15:10.722 "base_bdevs_list": [ 00:15:10.722 { 00:15:10.722 "name": "BaseBdev1", 00:15:10.722 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:10.722 "is_configured": true, 00:15:10.722 "data_offset": 0, 00:15:10.722 "data_size": 65536 00:15:10.722 }, 00:15:10.722 { 00:15:10.722 "name": 
"BaseBdev2", 00:15:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.722 "is_configured": false, 00:15:10.722 "data_offset": 0, 00:15:10.722 "data_size": 0 00:15:10.722 }, 00:15:10.722 { 00:15:10.722 "name": "BaseBdev3", 00:15:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.722 "is_configured": false, 00:15:10.722 "data_offset": 0, 00:15:10.722 "data_size": 0 00:15:10.722 } 00:15:10.722 ] 00:15:10.722 }' 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.722 07:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.293 [2024-11-29 07:47:01.024605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.293 [2024-11-29 07:47:01.024649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.293 [2024-11-29 07:47:01.036636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.293 [2024-11-29 07:47:01.038393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:11.293 [2024-11-29 07:47:01.038489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.293 [2024-11-29 07:47:01.038504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.293 [2024-11-29 07:47:01.038513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.293 "name": "Existed_Raid", 00:15:11.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.293 "strip_size_kb": 64, 00:15:11.293 "state": "configuring", 00:15:11.293 "raid_level": "raid5f", 00:15:11.293 "superblock": false, 00:15:11.293 "num_base_bdevs": 3, 00:15:11.293 "num_base_bdevs_discovered": 1, 00:15:11.293 "num_base_bdevs_operational": 3, 00:15:11.293 "base_bdevs_list": [ 00:15:11.293 { 00:15:11.293 "name": "BaseBdev1", 00:15:11.293 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:11.293 "is_configured": true, 00:15:11.293 "data_offset": 0, 00:15:11.293 "data_size": 65536 00:15:11.293 }, 00:15:11.293 { 00:15:11.293 "name": "BaseBdev2", 00:15:11.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.293 "is_configured": false, 00:15:11.293 "data_offset": 0, 00:15:11.293 "data_size": 0 00:15:11.293 }, 00:15:11.293 { 00:15:11.293 "name": "BaseBdev3", 00:15:11.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.293 "is_configured": false, 00:15:11.293 "data_offset": 0, 00:15:11.293 "data_size": 0 00:15:11.293 } 00:15:11.293 ] 00:15:11.293 }' 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.293 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.862 [2024-11-29 07:47:01.547299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.862 BaseBdev2 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.862 [ 00:15:11.862 { 00:15:11.862 "name": "BaseBdev2", 00:15:11.862 "aliases": [ 00:15:11.862 "7a26a0a2-31e8-4479-a4c4-490373eff158" 00:15:11.862 ], 00:15:11.862 "product_name": "Malloc disk", 00:15:11.862 "block_size": 512, 00:15:11.862 "num_blocks": 65536, 00:15:11.862 "uuid": "7a26a0a2-31e8-4479-a4c4-490373eff158", 00:15:11.862 "assigned_rate_limits": { 00:15:11.862 "rw_ios_per_sec": 0, 00:15:11.862 "rw_mbytes_per_sec": 0, 00:15:11.862 "r_mbytes_per_sec": 0, 00:15:11.862 "w_mbytes_per_sec": 0 00:15:11.862 }, 00:15:11.862 "claimed": true, 00:15:11.862 "claim_type": "exclusive_write", 00:15:11.862 "zoned": false, 00:15:11.862 "supported_io_types": { 00:15:11.862 "read": true, 00:15:11.862 "write": true, 00:15:11.862 "unmap": true, 00:15:11.862 "flush": true, 00:15:11.862 "reset": true, 00:15:11.862 "nvme_admin": false, 00:15:11.862 "nvme_io": false, 00:15:11.862 "nvme_io_md": false, 00:15:11.862 "write_zeroes": true, 00:15:11.862 "zcopy": true, 00:15:11.862 "get_zone_info": false, 00:15:11.862 "zone_management": false, 00:15:11.862 "zone_append": false, 00:15:11.862 "compare": false, 00:15:11.862 "compare_and_write": false, 00:15:11.862 "abort": true, 00:15:11.862 "seek_hole": false, 00:15:11.862 "seek_data": false, 00:15:11.862 "copy": true, 00:15:11.862 "nvme_iov_md": false 00:15:11.862 }, 00:15:11.862 "memory_domains": [ 00:15:11.862 { 00:15:11.862 "dma_device_id": "system", 00:15:11.862 "dma_device_type": 1 00:15:11.862 }, 00:15:11.862 { 00:15:11.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.862 "dma_device_type": 2 00:15:11.862 } 00:15:11.862 ], 00:15:11.862 "driver_specific": {} 00:15:11.862 } 00:15:11.862 ] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:11.862 "name": "Existed_Raid", 00:15:11.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.862 "strip_size_kb": 64, 00:15:11.862 "state": "configuring", 00:15:11.862 "raid_level": "raid5f", 00:15:11.862 "superblock": false, 00:15:11.862 "num_base_bdevs": 3, 00:15:11.862 "num_base_bdevs_discovered": 2, 00:15:11.862 "num_base_bdevs_operational": 3, 00:15:11.862 "base_bdevs_list": [ 00:15:11.862 { 00:15:11.862 "name": "BaseBdev1", 00:15:11.862 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:11.862 "is_configured": true, 00:15:11.862 "data_offset": 0, 00:15:11.862 "data_size": 65536 00:15:11.862 }, 00:15:11.862 { 00:15:11.862 "name": "BaseBdev2", 00:15:11.862 "uuid": "7a26a0a2-31e8-4479-a4c4-490373eff158", 00:15:11.862 "is_configured": true, 00:15:11.862 "data_offset": 0, 00:15:11.862 "data_size": 65536 00:15:11.862 }, 00:15:11.862 { 00:15:11.862 "name": "BaseBdev3", 00:15:11.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.862 "is_configured": false, 00:15:11.862 "data_offset": 0, 00:15:11.862 "data_size": 0 00:15:11.862 } 00:15:11.862 ] 00:15:11.862 }' 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.862 07:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.121 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.121 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.121 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.121 [2024-11-29 07:47:02.061818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.121 [2024-11-29 07:47:02.061876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.122 [2024-11-29 07:47:02.061890] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:12.122 [2024-11-29 07:47:02.062182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.381 [2024-11-29 07:47:02.068026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.381 [2024-11-29 07:47:02.068061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:12.381 [2024-11-29 07:47:02.068365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.381 BaseBdev3 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.381 [ 00:15:12.381 { 00:15:12.381 "name": "BaseBdev3", 00:15:12.381 "aliases": [ 00:15:12.381 "71d44444-34cc-442c-b9b5-f130850ac1b3" 00:15:12.381 ], 00:15:12.381 "product_name": "Malloc disk", 00:15:12.381 "block_size": 512, 00:15:12.381 "num_blocks": 65536, 00:15:12.381 "uuid": "71d44444-34cc-442c-b9b5-f130850ac1b3", 00:15:12.381 "assigned_rate_limits": { 00:15:12.381 "rw_ios_per_sec": 0, 00:15:12.381 "rw_mbytes_per_sec": 0, 00:15:12.381 "r_mbytes_per_sec": 0, 00:15:12.381 "w_mbytes_per_sec": 0 00:15:12.381 }, 00:15:12.381 "claimed": true, 00:15:12.381 "claim_type": "exclusive_write", 00:15:12.381 "zoned": false, 00:15:12.381 "supported_io_types": { 00:15:12.381 "read": true, 00:15:12.381 "write": true, 00:15:12.381 "unmap": true, 00:15:12.381 "flush": true, 00:15:12.381 "reset": true, 00:15:12.381 "nvme_admin": false, 00:15:12.381 "nvme_io": false, 00:15:12.381 "nvme_io_md": false, 00:15:12.381 "write_zeroes": true, 00:15:12.381 "zcopy": true, 00:15:12.381 "get_zone_info": false, 00:15:12.381 "zone_management": false, 00:15:12.381 "zone_append": false, 00:15:12.381 "compare": false, 00:15:12.381 "compare_and_write": false, 00:15:12.381 "abort": true, 00:15:12.381 "seek_hole": false, 00:15:12.381 "seek_data": false, 00:15:12.381 "copy": true, 00:15:12.381 "nvme_iov_md": false 00:15:12.381 }, 00:15:12.381 "memory_domains": [ 00:15:12.381 { 00:15:12.381 "dma_device_id": "system", 00:15:12.381 "dma_device_type": 1 00:15:12.381 }, 00:15:12.381 { 00:15:12.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.381 "dma_device_type": 2 00:15:12.381 } 00:15:12.381 ], 00:15:12.381 "driver_specific": {} 00:15:12.381 } 00:15:12.381 ] 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.381 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.382 07:47:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.382 "name": "Existed_Raid", 00:15:12.382 "uuid": "e11d3692-726c-4174-a0f8-2af72e92a3b9", 00:15:12.382 "strip_size_kb": 64, 00:15:12.382 "state": "online", 00:15:12.382 "raid_level": "raid5f", 00:15:12.382 "superblock": false, 00:15:12.382 "num_base_bdevs": 3, 00:15:12.382 "num_base_bdevs_discovered": 3, 00:15:12.382 "num_base_bdevs_operational": 3, 00:15:12.382 "base_bdevs_list": [ 00:15:12.382 { 00:15:12.382 "name": "BaseBdev1", 00:15:12.382 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:12.382 "is_configured": true, 00:15:12.382 "data_offset": 0, 00:15:12.382 "data_size": 65536 00:15:12.382 }, 00:15:12.382 { 00:15:12.382 "name": "BaseBdev2", 00:15:12.382 "uuid": "7a26a0a2-31e8-4479-a4c4-490373eff158", 00:15:12.382 "is_configured": true, 00:15:12.382 "data_offset": 0, 00:15:12.382 "data_size": 65536 00:15:12.382 }, 00:15:12.382 { 00:15:12.382 "name": "BaseBdev3", 00:15:12.382 "uuid": "71d44444-34cc-442c-b9b5-f130850ac1b3", 00:15:12.382 "is_configured": true, 00:15:12.382 "data_offset": 0, 00:15:12.382 "data_size": 65536 00:15:12.382 } 00:15:12.382 ] 00:15:12.382 }' 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.382 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.642 07:47:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.642 [2024-11-29 07:47:02.549841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.642 "name": "Existed_Raid", 00:15:12.642 "aliases": [ 00:15:12.642 "e11d3692-726c-4174-a0f8-2af72e92a3b9" 00:15:12.642 ], 00:15:12.642 "product_name": "Raid Volume", 00:15:12.642 "block_size": 512, 00:15:12.642 "num_blocks": 131072, 00:15:12.642 "uuid": "e11d3692-726c-4174-a0f8-2af72e92a3b9", 00:15:12.642 "assigned_rate_limits": { 00:15:12.642 "rw_ios_per_sec": 0, 00:15:12.642 "rw_mbytes_per_sec": 0, 00:15:12.642 "r_mbytes_per_sec": 0, 00:15:12.642 "w_mbytes_per_sec": 0 00:15:12.642 }, 00:15:12.642 "claimed": false, 00:15:12.642 "zoned": false, 00:15:12.642 "supported_io_types": { 00:15:12.642 "read": true, 00:15:12.642 "write": true, 00:15:12.642 "unmap": false, 00:15:12.642 "flush": false, 00:15:12.642 "reset": true, 00:15:12.642 "nvme_admin": false, 00:15:12.642 "nvme_io": false, 00:15:12.642 "nvme_io_md": false, 00:15:12.642 "write_zeroes": true, 00:15:12.642 "zcopy": false, 00:15:12.642 "get_zone_info": false, 00:15:12.642 "zone_management": false, 00:15:12.642 "zone_append": false, 
00:15:12.642 "compare": false, 00:15:12.642 "compare_and_write": false, 00:15:12.642 "abort": false, 00:15:12.642 "seek_hole": false, 00:15:12.642 "seek_data": false, 00:15:12.642 "copy": false, 00:15:12.642 "nvme_iov_md": false 00:15:12.642 }, 00:15:12.642 "driver_specific": { 00:15:12.642 "raid": { 00:15:12.642 "uuid": "e11d3692-726c-4174-a0f8-2af72e92a3b9", 00:15:12.642 "strip_size_kb": 64, 00:15:12.642 "state": "online", 00:15:12.642 "raid_level": "raid5f", 00:15:12.642 "superblock": false, 00:15:12.642 "num_base_bdevs": 3, 00:15:12.642 "num_base_bdevs_discovered": 3, 00:15:12.642 "num_base_bdevs_operational": 3, 00:15:12.642 "base_bdevs_list": [ 00:15:12.642 { 00:15:12.642 "name": "BaseBdev1", 00:15:12.642 "uuid": "fef24fd2-b69e-47ac-985f-a97c3446e67b", 00:15:12.642 "is_configured": true, 00:15:12.642 "data_offset": 0, 00:15:12.642 "data_size": 65536 00:15:12.642 }, 00:15:12.642 { 00:15:12.642 "name": "BaseBdev2", 00:15:12.642 "uuid": "7a26a0a2-31e8-4479-a4c4-490373eff158", 00:15:12.642 "is_configured": true, 00:15:12.642 "data_offset": 0, 00:15:12.642 "data_size": 65536 00:15:12.642 }, 00:15:12.642 { 00:15:12.642 "name": "BaseBdev3", 00:15:12.642 "uuid": "71d44444-34cc-442c-b9b5-f130850ac1b3", 00:15:12.642 "is_configured": true, 00:15:12.642 "data_offset": 0, 00:15:12.642 "data_size": 65536 00:15:12.642 } 00:15:12.642 ] 00:15:12.642 } 00:15:12.642 } 00:15:12.642 }' 00:15:12.642 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:12.902 BaseBdev2 00:15:12.902 BaseBdev3' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.902 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.902 [2024-11-29 07:47:02.785239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:13.162 
07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.162 "name": "Existed_Raid", 00:15:13.162 "uuid": "e11d3692-726c-4174-a0f8-2af72e92a3b9", 00:15:13.162 "strip_size_kb": 64, 00:15:13.162 "state": 
"online", 00:15:13.162 "raid_level": "raid5f", 00:15:13.162 "superblock": false, 00:15:13.162 "num_base_bdevs": 3, 00:15:13.162 "num_base_bdevs_discovered": 2, 00:15:13.162 "num_base_bdevs_operational": 2, 00:15:13.162 "base_bdevs_list": [ 00:15:13.162 { 00:15:13.162 "name": null, 00:15:13.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.162 "is_configured": false, 00:15:13.162 "data_offset": 0, 00:15:13.162 "data_size": 65536 00:15:13.162 }, 00:15:13.162 { 00:15:13.162 "name": "BaseBdev2", 00:15:13.162 "uuid": "7a26a0a2-31e8-4479-a4c4-490373eff158", 00:15:13.162 "is_configured": true, 00:15:13.162 "data_offset": 0, 00:15:13.162 "data_size": 65536 00:15:13.162 }, 00:15:13.162 { 00:15:13.162 "name": "BaseBdev3", 00:15:13.162 "uuid": "71d44444-34cc-442c-b9b5-f130850ac1b3", 00:15:13.162 "is_configured": true, 00:15:13.162 "data_offset": 0, 00:15:13.162 "data_size": 65536 00:15:13.162 } 00:15:13.162 ] 00:15:13.162 }' 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.162 07:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.420 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.679 [2024-11-29 07:47:03.391067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.679 [2024-11-29 07:47:03.391182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.679 [2024-11-29 07:47:03.482272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.679 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.680 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.680 [2024-11-29 07:47:03.542232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:13.680 [2024-11-29 07:47:03.542280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.940 BaseBdev2 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.940 [ 00:15:13.940 { 00:15:13.940 "name": "BaseBdev2", 00:15:13.940 "aliases": [ 00:15:13.940 "c78affab-58bf-4ca5-8ee6-4f38edc26508" 00:15:13.940 ], 00:15:13.940 "product_name": "Malloc disk", 00:15:13.940 "block_size": 512, 00:15:13.940 "num_blocks": 65536, 00:15:13.940 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:13.940 "assigned_rate_limits": { 00:15:13.940 "rw_ios_per_sec": 0, 00:15:13.940 "rw_mbytes_per_sec": 0, 00:15:13.940 "r_mbytes_per_sec": 0, 00:15:13.940 "w_mbytes_per_sec": 0 00:15:13.940 }, 00:15:13.940 "claimed": false, 00:15:13.940 "zoned": false, 00:15:13.940 "supported_io_types": { 00:15:13.940 "read": true, 00:15:13.940 "write": true, 00:15:13.940 "unmap": true, 00:15:13.940 "flush": true, 00:15:13.940 "reset": true, 00:15:13.940 "nvme_admin": false, 00:15:13.940 "nvme_io": false, 00:15:13.940 "nvme_io_md": false, 00:15:13.940 "write_zeroes": true, 00:15:13.940 "zcopy": true, 00:15:13.940 "get_zone_info": false, 00:15:13.940 "zone_management": false, 00:15:13.940 "zone_append": false, 00:15:13.940 "compare": false, 00:15:13.940 "compare_and_write": false, 00:15:13.940 "abort": true, 00:15:13.940 "seek_hole": false, 00:15:13.940 "seek_data": false, 00:15:13.940 "copy": true, 00:15:13.940 "nvme_iov_md": false 00:15:13.940 }, 00:15:13.940 "memory_domains": [ 00:15:13.940 { 00:15:13.940 "dma_device_id": "system", 00:15:13.940 "dma_device_type": 1 00:15:13.940 }, 00:15:13.940 { 00:15:13.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.940 "dma_device_type": 2 00:15:13.940 } 00:15:13.940 ], 00:15:13.940 "driver_specific": {} 00:15:13.940 } 00:15:13.940 ] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:13.940 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.941 BaseBdev3 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.941 [ 00:15:13.941 { 00:15:13.941 "name": "BaseBdev3", 00:15:13.941 "aliases": [ 00:15:13.941 "746f02d3-5fc8-424b-93df-a77155757606" 00:15:13.941 ], 00:15:13.941 "product_name": "Malloc disk", 00:15:13.941 "block_size": 512, 00:15:13.941 "num_blocks": 65536, 00:15:13.941 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:13.941 "assigned_rate_limits": { 00:15:13.941 "rw_ios_per_sec": 0, 00:15:13.941 "rw_mbytes_per_sec": 0, 00:15:13.941 "r_mbytes_per_sec": 0, 00:15:13.941 "w_mbytes_per_sec": 0 00:15:13.941 }, 00:15:13.941 "claimed": false, 00:15:13.941 "zoned": false, 00:15:13.941 "supported_io_types": { 00:15:13.941 "read": true, 00:15:13.941 "write": true, 00:15:13.941 "unmap": true, 00:15:13.941 "flush": true, 00:15:13.941 "reset": true, 00:15:13.941 "nvme_admin": false, 00:15:13.941 "nvme_io": false, 00:15:13.941 "nvme_io_md": false, 00:15:13.941 "write_zeroes": true, 00:15:13.941 "zcopy": true, 00:15:13.941 "get_zone_info": false, 00:15:13.941 "zone_management": false, 00:15:13.941 "zone_append": false, 00:15:13.941 "compare": false, 00:15:13.941 "compare_and_write": false, 00:15:13.941 "abort": true, 00:15:13.941 "seek_hole": false, 00:15:13.941 "seek_data": false, 00:15:13.941 "copy": true, 00:15:13.941 "nvme_iov_md": false 00:15:13.941 }, 00:15:13.941 "memory_domains": [ 00:15:13.941 { 00:15:13.941 "dma_device_id": "system", 00:15:13.941 "dma_device_type": 1 00:15:13.941 }, 00:15:13.941 { 00:15:13.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.941 "dma_device_type": 2 00:15:13.941 } 00:15:13.941 ], 00:15:13.941 "driver_specific": {} 00:15:13.941 } 00:15:13.941 ] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.941 07:47:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.941 [2024-11-29 07:47:03.846405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.941 [2024-11-29 07:47:03.846448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.941 [2024-11-29 07:47:03.846485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.941 [2024-11-29 07:47:03.848184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.941 07:47:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.941 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.201 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.201 "name": "Existed_Raid", 00:15:14.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.201 "strip_size_kb": 64, 00:15:14.201 "state": "configuring", 00:15:14.201 "raid_level": "raid5f", 00:15:14.201 "superblock": false, 00:15:14.201 "num_base_bdevs": 3, 00:15:14.201 "num_base_bdevs_discovered": 2, 00:15:14.201 "num_base_bdevs_operational": 3, 00:15:14.201 "base_bdevs_list": [ 00:15:14.201 { 00:15:14.201 "name": "BaseBdev1", 00:15:14.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.201 "is_configured": false, 00:15:14.201 "data_offset": 0, 00:15:14.201 "data_size": 0 00:15:14.201 }, 00:15:14.201 { 00:15:14.201 "name": "BaseBdev2", 00:15:14.201 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:14.201 "is_configured": true, 00:15:14.201 "data_offset": 0, 00:15:14.201 "data_size": 65536 00:15:14.201 }, 00:15:14.201 { 00:15:14.201 "name": "BaseBdev3", 00:15:14.201 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:14.201 "is_configured": true, 
00:15:14.201 "data_offset": 0, 00:15:14.201 "data_size": 65536 00:15:14.201 } 00:15:14.201 ] 00:15:14.201 }' 00:15:14.201 07:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.201 07:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.461 [2024-11-29 07:47:04.249712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.461 07:47:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.461 "name": "Existed_Raid", 00:15:14.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.461 "strip_size_kb": 64, 00:15:14.461 "state": "configuring", 00:15:14.461 "raid_level": "raid5f", 00:15:14.461 "superblock": false, 00:15:14.461 "num_base_bdevs": 3, 00:15:14.461 "num_base_bdevs_discovered": 1, 00:15:14.461 "num_base_bdevs_operational": 3, 00:15:14.461 "base_bdevs_list": [ 00:15:14.461 { 00:15:14.461 "name": "BaseBdev1", 00:15:14.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.461 "is_configured": false, 00:15:14.461 "data_offset": 0, 00:15:14.461 "data_size": 0 00:15:14.461 }, 00:15:14.461 { 00:15:14.461 "name": null, 00:15:14.461 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:14.461 "is_configured": false, 00:15:14.461 "data_offset": 0, 00:15:14.461 "data_size": 65536 00:15:14.461 }, 00:15:14.461 { 00:15:14.461 "name": "BaseBdev3", 00:15:14.461 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:14.461 "is_configured": true, 00:15:14.461 "data_offset": 0, 00:15:14.461 "data_size": 65536 00:15:14.461 } 00:15:14.461 ] 00:15:14.461 }' 00:15:14.461 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.461 07:47:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.030 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.030 [2024-11-29 07:47:04.778765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.030 BaseBdev1 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.031 07:47:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.031 [ 00:15:15.031 { 00:15:15.031 "name": "BaseBdev1", 00:15:15.031 "aliases": [ 00:15:15.031 "6c63ad27-21d2-4deb-befb-31a9296acf5f" 00:15:15.031 ], 00:15:15.031 "product_name": "Malloc disk", 00:15:15.031 "block_size": 512, 00:15:15.031 "num_blocks": 65536, 00:15:15.031 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:15.031 "assigned_rate_limits": { 00:15:15.031 "rw_ios_per_sec": 0, 00:15:15.031 "rw_mbytes_per_sec": 0, 00:15:15.031 "r_mbytes_per_sec": 0, 00:15:15.031 "w_mbytes_per_sec": 0 00:15:15.031 }, 00:15:15.031 "claimed": true, 00:15:15.031 "claim_type": "exclusive_write", 00:15:15.031 "zoned": false, 00:15:15.031 "supported_io_types": { 00:15:15.031 "read": true, 00:15:15.031 "write": true, 00:15:15.031 "unmap": true, 00:15:15.031 "flush": true, 00:15:15.031 "reset": true, 00:15:15.031 "nvme_admin": false, 00:15:15.031 "nvme_io": false, 00:15:15.031 "nvme_io_md": false, 00:15:15.031 "write_zeroes": true, 00:15:15.031 "zcopy": true, 00:15:15.031 "get_zone_info": false, 00:15:15.031 "zone_management": false, 00:15:15.031 "zone_append": false, 00:15:15.031 
"compare": false, 00:15:15.031 "compare_and_write": false, 00:15:15.031 "abort": true, 00:15:15.031 "seek_hole": false, 00:15:15.031 "seek_data": false, 00:15:15.031 "copy": true, 00:15:15.031 "nvme_iov_md": false 00:15:15.031 }, 00:15:15.031 "memory_domains": [ 00:15:15.031 { 00:15:15.031 "dma_device_id": "system", 00:15:15.031 "dma_device_type": 1 00:15:15.031 }, 00:15:15.031 { 00:15:15.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.031 "dma_device_type": 2 00:15:15.031 } 00:15:15.031 ], 00:15:15.031 "driver_specific": {} 00:15:15.031 } 00:15:15.031 ] 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.031 07:47:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.031 "name": "Existed_Raid", 00:15:15.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.031 "strip_size_kb": 64, 00:15:15.031 "state": "configuring", 00:15:15.031 "raid_level": "raid5f", 00:15:15.031 "superblock": false, 00:15:15.031 "num_base_bdevs": 3, 00:15:15.031 "num_base_bdevs_discovered": 2, 00:15:15.031 "num_base_bdevs_operational": 3, 00:15:15.031 "base_bdevs_list": [ 00:15:15.031 { 00:15:15.031 "name": "BaseBdev1", 00:15:15.031 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:15.031 "is_configured": true, 00:15:15.031 "data_offset": 0, 00:15:15.031 "data_size": 65536 00:15:15.031 }, 00:15:15.031 { 00:15:15.031 "name": null, 00:15:15.031 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:15.031 "is_configured": false, 00:15:15.031 "data_offset": 0, 00:15:15.031 "data_size": 65536 00:15:15.031 }, 00:15:15.031 { 00:15:15.031 "name": "BaseBdev3", 00:15:15.031 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:15.031 "is_configured": true, 00:15:15.031 "data_offset": 0, 00:15:15.031 "data_size": 65536 00:15:15.031 } 00:15:15.031 ] 00:15:15.031 }' 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.031 07:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.598 07:47:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.598 [2024-11-29 07:47:05.297901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.598 07:47:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.598 "name": "Existed_Raid", 00:15:15.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.598 "strip_size_kb": 64, 00:15:15.598 "state": "configuring", 00:15:15.598 "raid_level": "raid5f", 00:15:15.598 "superblock": false, 00:15:15.598 "num_base_bdevs": 3, 00:15:15.598 "num_base_bdevs_discovered": 1, 00:15:15.598 "num_base_bdevs_operational": 3, 00:15:15.598 "base_bdevs_list": [ 00:15:15.598 { 00:15:15.598 "name": "BaseBdev1", 00:15:15.598 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:15.598 "is_configured": true, 00:15:15.598 "data_offset": 0, 00:15:15.598 "data_size": 65536 00:15:15.598 }, 00:15:15.598 { 00:15:15.598 "name": null, 00:15:15.598 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:15.598 "is_configured": false, 00:15:15.598 "data_offset": 0, 00:15:15.598 "data_size": 65536 00:15:15.598 }, 00:15:15.598 { 00:15:15.598 "name": null, 
00:15:15.598 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:15.598 "is_configured": false, 00:15:15.598 "data_offset": 0, 00:15:15.598 "data_size": 65536 00:15:15.598 } 00:15:15.598 ] 00:15:15.598 }' 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.598 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.858 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.118 [2024-11-29 07:47:05.805081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.118 07:47:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.118 "name": "Existed_Raid", 00:15:16.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.118 "strip_size_kb": 64, 00:15:16.118 "state": "configuring", 00:15:16.118 "raid_level": "raid5f", 00:15:16.118 "superblock": false, 00:15:16.118 "num_base_bdevs": 3, 00:15:16.118 "num_base_bdevs_discovered": 2, 00:15:16.118 "num_base_bdevs_operational": 3, 00:15:16.118 "base_bdevs_list": [ 00:15:16.118 { 
00:15:16.118 "name": "BaseBdev1", 00:15:16.118 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:16.118 "is_configured": true, 00:15:16.118 "data_offset": 0, 00:15:16.118 "data_size": 65536 00:15:16.118 }, 00:15:16.118 { 00:15:16.118 "name": null, 00:15:16.118 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:16.118 "is_configured": false, 00:15:16.118 "data_offset": 0, 00:15:16.118 "data_size": 65536 00:15:16.118 }, 00:15:16.118 { 00:15:16.118 "name": "BaseBdev3", 00:15:16.118 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:16.118 "is_configured": true, 00:15:16.118 "data_offset": 0, 00:15:16.118 "data_size": 65536 00:15:16.118 } 00:15:16.118 ] 00:15:16.118 }' 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.118 07:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.378 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.378 [2024-11-29 07:47:06.292239] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.638 "name": "Existed_Raid", 00:15:16.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.638 "strip_size_kb": 64, 00:15:16.638 "state": "configuring", 00:15:16.638 "raid_level": "raid5f", 00:15:16.638 "superblock": false, 00:15:16.638 "num_base_bdevs": 3, 00:15:16.638 "num_base_bdevs_discovered": 1, 00:15:16.638 "num_base_bdevs_operational": 3, 00:15:16.638 "base_bdevs_list": [ 00:15:16.638 { 00:15:16.638 "name": null, 00:15:16.638 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:16.638 "is_configured": false, 00:15:16.638 "data_offset": 0, 00:15:16.638 "data_size": 65536 00:15:16.638 }, 00:15:16.638 { 00:15:16.638 "name": null, 00:15:16.638 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:16.638 "is_configured": false, 00:15:16.638 "data_offset": 0, 00:15:16.638 "data_size": 65536 00:15:16.638 }, 00:15:16.638 { 00:15:16.638 "name": "BaseBdev3", 00:15:16.638 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:16.638 "is_configured": true, 00:15:16.638 "data_offset": 0, 00:15:16.638 "data_size": 65536 00:15:16.638 } 00:15:16.638 ] 00:15:16.638 }' 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.638 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.899 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:16.899 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.899 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.899 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.159 [2024-11-29 07:47:06.863344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.159 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.160 07:47:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.160 "name": "Existed_Raid", 00:15:17.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.160 "strip_size_kb": 64, 00:15:17.160 "state": "configuring", 00:15:17.160 "raid_level": "raid5f", 00:15:17.160 "superblock": false, 00:15:17.160 "num_base_bdevs": 3, 00:15:17.160 "num_base_bdevs_discovered": 2, 00:15:17.160 "num_base_bdevs_operational": 3, 00:15:17.160 "base_bdevs_list": [ 00:15:17.160 { 00:15:17.160 "name": null, 00:15:17.160 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:17.160 "is_configured": false, 00:15:17.160 "data_offset": 0, 00:15:17.160 "data_size": 65536 00:15:17.160 }, 00:15:17.160 { 00:15:17.160 "name": "BaseBdev2", 00:15:17.160 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:17.160 "is_configured": true, 00:15:17.160 "data_offset": 0, 00:15:17.160 "data_size": 65536 00:15:17.160 }, 00:15:17.160 { 00:15:17.160 "name": "BaseBdev3", 00:15:17.160 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:17.160 "is_configured": true, 00:15:17.160 "data_offset": 0, 00:15:17.160 "data_size": 65536 00:15:17.160 } 00:15:17.160 ] 00:15:17.160 }' 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.160 07:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.419 
07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.419 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6c63ad27-21d2-4deb-befb-31a9296acf5f 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.678 [2024-11-29 07:47:07.457631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:17.678 [2024-11-29 07:47:07.457676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:17.678 [2024-11-29 07:47:07.457685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:17.678 [2024-11-29 07:47:07.457907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:17.678 [2024-11-29 07:47:07.463005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:17.678 [2024-11-29 07:47:07.463028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:17.678 [2024-11-29 07:47:07.463321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.678 NewBaseBdev 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.678 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.679 07:47:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.679 [ 00:15:17.679 { 00:15:17.679 "name": "NewBaseBdev", 00:15:17.679 "aliases": [ 00:15:17.679 "6c63ad27-21d2-4deb-befb-31a9296acf5f" 00:15:17.679 ], 00:15:17.679 "product_name": "Malloc disk", 00:15:17.679 "block_size": 512, 00:15:17.679 "num_blocks": 65536, 00:15:17.679 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:17.679 "assigned_rate_limits": { 00:15:17.679 "rw_ios_per_sec": 0, 00:15:17.679 "rw_mbytes_per_sec": 0, 00:15:17.679 "r_mbytes_per_sec": 0, 00:15:17.679 "w_mbytes_per_sec": 0 00:15:17.679 }, 00:15:17.679 "claimed": true, 00:15:17.679 "claim_type": "exclusive_write", 00:15:17.679 "zoned": false, 00:15:17.679 "supported_io_types": { 00:15:17.679 "read": true, 00:15:17.679 "write": true, 00:15:17.679 "unmap": true, 00:15:17.679 "flush": true, 00:15:17.679 "reset": true, 00:15:17.679 "nvme_admin": false, 00:15:17.679 "nvme_io": false, 00:15:17.679 "nvme_io_md": false, 00:15:17.679 "write_zeroes": true, 00:15:17.679 "zcopy": true, 00:15:17.679 "get_zone_info": false, 00:15:17.679 "zone_management": false, 00:15:17.679 "zone_append": false, 00:15:17.679 "compare": false, 00:15:17.679 "compare_and_write": false, 00:15:17.679 "abort": true, 00:15:17.679 "seek_hole": false, 00:15:17.679 "seek_data": false, 00:15:17.679 "copy": true, 00:15:17.679 "nvme_iov_md": false 00:15:17.679 }, 00:15:17.679 "memory_domains": [ 00:15:17.679 { 00:15:17.679 "dma_device_id": "system", 00:15:17.679 "dma_device_type": 1 00:15:17.679 }, 00:15:17.679 { 00:15:17.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.679 "dma_device_type": 2 00:15:17.679 } 00:15:17.679 ], 00:15:17.679 "driver_specific": {} 00:15:17.679 } 00:15:17.679 ] 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:17.679 07:47:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.679 "name": "Existed_Raid", 00:15:17.679 "uuid": "7485fd79-5d84-44e1-87cc-a9c4e068dbda", 00:15:17.679 "strip_size_kb": 64, 00:15:17.679 "state": "online", 
00:15:17.679 "raid_level": "raid5f", 00:15:17.679 "superblock": false, 00:15:17.679 "num_base_bdevs": 3, 00:15:17.679 "num_base_bdevs_discovered": 3, 00:15:17.679 "num_base_bdevs_operational": 3, 00:15:17.679 "base_bdevs_list": [ 00:15:17.679 { 00:15:17.679 "name": "NewBaseBdev", 00:15:17.679 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:17.679 "is_configured": true, 00:15:17.679 "data_offset": 0, 00:15:17.679 "data_size": 65536 00:15:17.679 }, 00:15:17.679 { 00:15:17.679 "name": "BaseBdev2", 00:15:17.679 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:17.679 "is_configured": true, 00:15:17.679 "data_offset": 0, 00:15:17.679 "data_size": 65536 00:15:17.679 }, 00:15:17.679 { 00:15:17.679 "name": "BaseBdev3", 00:15:17.679 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:17.679 "is_configured": true, 00:15:17.679 "data_offset": 0, 00:15:17.679 "data_size": 65536 00:15:17.679 } 00:15:17.679 ] 00:15:17.679 }' 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.679 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:18.247 07:47:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 [2024-11-29 07:47:07.925061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.247 "name": "Existed_Raid", 00:15:18.247 "aliases": [ 00:15:18.247 "7485fd79-5d84-44e1-87cc-a9c4e068dbda" 00:15:18.247 ], 00:15:18.247 "product_name": "Raid Volume", 00:15:18.247 "block_size": 512, 00:15:18.247 "num_blocks": 131072, 00:15:18.247 "uuid": "7485fd79-5d84-44e1-87cc-a9c4e068dbda", 00:15:18.247 "assigned_rate_limits": { 00:15:18.247 "rw_ios_per_sec": 0, 00:15:18.247 "rw_mbytes_per_sec": 0, 00:15:18.247 "r_mbytes_per_sec": 0, 00:15:18.247 "w_mbytes_per_sec": 0 00:15:18.247 }, 00:15:18.247 "claimed": false, 00:15:18.247 "zoned": false, 00:15:18.247 "supported_io_types": { 00:15:18.247 "read": true, 00:15:18.247 "write": true, 00:15:18.247 "unmap": false, 00:15:18.247 "flush": false, 00:15:18.247 "reset": true, 00:15:18.247 "nvme_admin": false, 00:15:18.247 "nvme_io": false, 00:15:18.247 "nvme_io_md": false, 00:15:18.247 "write_zeroes": true, 00:15:18.247 "zcopy": false, 00:15:18.247 "get_zone_info": false, 00:15:18.247 "zone_management": false, 00:15:18.247 "zone_append": false, 00:15:18.247 "compare": false, 00:15:18.247 "compare_and_write": false, 00:15:18.247 "abort": false, 00:15:18.247 "seek_hole": false, 00:15:18.247 "seek_data": false, 00:15:18.247 "copy": false, 00:15:18.247 "nvme_iov_md": false 00:15:18.247 }, 00:15:18.247 "driver_specific": { 00:15:18.247 "raid": { 00:15:18.247 "uuid": 
"7485fd79-5d84-44e1-87cc-a9c4e068dbda", 00:15:18.247 "strip_size_kb": 64, 00:15:18.247 "state": "online", 00:15:18.247 "raid_level": "raid5f", 00:15:18.247 "superblock": false, 00:15:18.247 "num_base_bdevs": 3, 00:15:18.247 "num_base_bdevs_discovered": 3, 00:15:18.247 "num_base_bdevs_operational": 3, 00:15:18.247 "base_bdevs_list": [ 00:15:18.247 { 00:15:18.247 "name": "NewBaseBdev", 00:15:18.247 "uuid": "6c63ad27-21d2-4deb-befb-31a9296acf5f", 00:15:18.247 "is_configured": true, 00:15:18.247 "data_offset": 0, 00:15:18.247 "data_size": 65536 00:15:18.247 }, 00:15:18.247 { 00:15:18.247 "name": "BaseBdev2", 00:15:18.247 "uuid": "c78affab-58bf-4ca5-8ee6-4f38edc26508", 00:15:18.247 "is_configured": true, 00:15:18.247 "data_offset": 0, 00:15:18.247 "data_size": 65536 00:15:18.247 }, 00:15:18.247 { 00:15:18.247 "name": "BaseBdev3", 00:15:18.247 "uuid": "746f02d3-5fc8-424b-93df-a77155757606", 00:15:18.247 "is_configured": true, 00:15:18.247 "data_offset": 0, 00:15:18.247 "data_size": 65536 00:15:18.247 } 00:15:18.247 ] 00:15:18.247 } 00:15:18.247 } 00:15:18.247 }' 00:15:18.247 07:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.247 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:18.247 BaseBdev2 00:15:18.247 BaseBdev3' 00:15:18.247 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.247 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.247 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.247 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.247 07:47:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.248 07:47:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.248 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.508 [2024-11-29 07:47:08.224322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.508 [2024-11-29 07:47:08.224351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.508 [2024-11-29 07:47:08.224423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.508 [2024-11-29 07:47:08.224698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.508 [2024-11-29 07:47:08.224715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79596 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79596 ']' 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79596 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79596 00:15:18.508 killing process with pid 79596 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79596' 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79596 00:15:18.508 [2024-11-29 07:47:08.271438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.508 07:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79596 00:15:18.768 [2024-11-29 07:47:08.549764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.714 07:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:19.714 00:15:19.714 real 0m10.519s 00:15:19.714 user 0m16.868s 00:15:19.714 sys 0m1.827s 00:15:19.714 07:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.714 07:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.714 ************************************ 00:15:19.714 END TEST raid5f_state_function_test 00:15:19.714 ************************************ 00:15:19.974 07:47:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:19.974 07:47:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:19.975 07:47:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.975 07:47:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.975 ************************************ 00:15:19.975 START TEST raid5f_state_function_test_sb 00:15:19.975 ************************************ 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:19.975 07:47:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80212 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:19.975 Process raid pid: 80212 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80212' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80212 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80212 ']' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.975 07:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.975 [2024-11-29 07:47:09.783730] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:19.975 [2024-11-29 07:47:09.783850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.235 [2024-11-29 07:47:09.959142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.235 [2024-11-29 07:47:10.061255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.494 [2024-11-29 07:47:10.256988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.494 [2024-11-29 07:47:10.257022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.753 [2024-11-29 07:47:10.614375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.753 [2024-11-29 07:47:10.614429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.753 [2024-11-29 07:47:10.614439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.753 [2024-11-29 07:47:10.614465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.753 [2024-11-29 07:47:10.614476] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:20.753 [2024-11-29 07:47:10.614486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.753 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.754 07:47:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.754 "name": "Existed_Raid", 00:15:20.754 "uuid": "04e7f8e0-0b72-4afa-84e3-54cae4a7dac5", 00:15:20.754 "strip_size_kb": 64, 00:15:20.754 "state": "configuring", 00:15:20.754 "raid_level": "raid5f", 00:15:20.754 "superblock": true, 00:15:20.754 "num_base_bdevs": 3, 00:15:20.754 "num_base_bdevs_discovered": 0, 00:15:20.754 "num_base_bdevs_operational": 3, 00:15:20.754 "base_bdevs_list": [ 00:15:20.754 { 00:15:20.754 "name": "BaseBdev1", 00:15:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.754 "is_configured": false, 00:15:20.754 "data_offset": 0, 00:15:20.754 "data_size": 0 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "name": "BaseBdev2", 00:15:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.754 "is_configured": false, 00:15:20.754 "data_offset": 0, 00:15:20.754 "data_size": 0 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "name": "BaseBdev3", 00:15:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.754 "is_configured": false, 00:15:20.754 "data_offset": 0, 00:15:20.754 "data_size": 0 00:15:20.754 } 00:15:20.754 ] 00:15:20.754 }' 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.754 07:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.324 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.324 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.324 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.324 [2024-11-29 07:47:11.049537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.324 
[2024-11-29 07:47:11.049572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 [2024-11-29 07:47:11.061531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.325 [2024-11-29 07:47:11.061573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.325 [2024-11-29 07:47:11.061582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.325 [2024-11-29 07:47:11.061591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.325 [2024-11-29 07:47:11.061597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.325 [2024-11-29 07:47:11.061605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 [2024-11-29 07:47:11.107434] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.325 BaseBdev1 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 [ 00:15:21.325 { 00:15:21.325 "name": "BaseBdev1", 00:15:21.325 "aliases": [ 00:15:21.325 "e303aab2-4b4a-4d66-8703-a6d7edb0f782" 00:15:21.325 ], 00:15:21.325 "product_name": "Malloc disk", 00:15:21.325 "block_size": 512, 00:15:21.325 
"num_blocks": 65536, 00:15:21.325 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:21.325 "assigned_rate_limits": { 00:15:21.325 "rw_ios_per_sec": 0, 00:15:21.325 "rw_mbytes_per_sec": 0, 00:15:21.325 "r_mbytes_per_sec": 0, 00:15:21.325 "w_mbytes_per_sec": 0 00:15:21.325 }, 00:15:21.325 "claimed": true, 00:15:21.325 "claim_type": "exclusive_write", 00:15:21.325 "zoned": false, 00:15:21.325 "supported_io_types": { 00:15:21.325 "read": true, 00:15:21.325 "write": true, 00:15:21.325 "unmap": true, 00:15:21.325 "flush": true, 00:15:21.325 "reset": true, 00:15:21.325 "nvme_admin": false, 00:15:21.325 "nvme_io": false, 00:15:21.325 "nvme_io_md": false, 00:15:21.325 "write_zeroes": true, 00:15:21.325 "zcopy": true, 00:15:21.325 "get_zone_info": false, 00:15:21.325 "zone_management": false, 00:15:21.325 "zone_append": false, 00:15:21.325 "compare": false, 00:15:21.325 "compare_and_write": false, 00:15:21.325 "abort": true, 00:15:21.325 "seek_hole": false, 00:15:21.325 "seek_data": false, 00:15:21.325 "copy": true, 00:15:21.325 "nvme_iov_md": false 00:15:21.325 }, 00:15:21.325 "memory_domains": [ 00:15:21.325 { 00:15:21.325 "dma_device_id": "system", 00:15:21.325 "dma_device_type": 1 00:15:21.325 }, 00:15:21.325 { 00:15:21.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.325 "dma_device_type": 2 00:15:21.325 } 00:15:21.325 ], 00:15:21.325 "driver_specific": {} 00:15:21.325 } 00:15:21.325 ] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.325 "name": "Existed_Raid", 00:15:21.325 "uuid": "4ff73d44-a1f8-4ea1-bd96-b0dc38c92e29", 00:15:21.325 "strip_size_kb": 64, 00:15:21.325 "state": "configuring", 00:15:21.325 "raid_level": "raid5f", 00:15:21.325 "superblock": true, 00:15:21.325 "num_base_bdevs": 3, 00:15:21.325 "num_base_bdevs_discovered": 1, 00:15:21.325 "num_base_bdevs_operational": 3, 00:15:21.325 "base_bdevs_list": [ 00:15:21.325 { 00:15:21.325 
"name": "BaseBdev1", 00:15:21.325 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:21.325 "is_configured": true, 00:15:21.325 "data_offset": 2048, 00:15:21.325 "data_size": 63488 00:15:21.325 }, 00:15:21.325 { 00:15:21.325 "name": "BaseBdev2", 00:15:21.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.325 "is_configured": false, 00:15:21.325 "data_offset": 0, 00:15:21.325 "data_size": 0 00:15:21.325 }, 00:15:21.325 { 00:15:21.325 "name": "BaseBdev3", 00:15:21.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.325 "is_configured": false, 00:15:21.325 "data_offset": 0, 00:15:21.325 "data_size": 0 00:15:21.325 } 00:15:21.325 ] 00:15:21.325 }' 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.325 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.896 [2024-11-29 07:47:11.578641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.896 [2024-11-29 07:47:11.578692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:21.896 [2024-11-29 07:47:11.590674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.896 [2024-11-29 07:47:11.592446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.896 [2024-11-29 07:47:11.592489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.896 [2024-11-29 07:47:11.592498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.896 [2024-11-29 07:47:11.592507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.896 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.896 "name": "Existed_Raid", 00:15:21.896 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:21.896 "strip_size_kb": 64, 00:15:21.896 "state": "configuring", 00:15:21.896 "raid_level": "raid5f", 00:15:21.896 "superblock": true, 00:15:21.896 "num_base_bdevs": 3, 00:15:21.896 "num_base_bdevs_discovered": 1, 00:15:21.896 "num_base_bdevs_operational": 3, 00:15:21.896 "base_bdevs_list": [ 00:15:21.896 { 00:15:21.896 "name": "BaseBdev1", 00:15:21.896 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:21.896 "is_configured": true, 00:15:21.896 "data_offset": 2048, 00:15:21.896 "data_size": 63488 00:15:21.896 }, 00:15:21.897 { 00:15:21.897 "name": "BaseBdev2", 00:15:21.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.897 "is_configured": false, 00:15:21.897 "data_offset": 0, 00:15:21.897 "data_size": 0 00:15:21.897 }, 00:15:21.897 { 00:15:21.897 "name": "BaseBdev3", 00:15:21.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.897 "is_configured": false, 00:15:21.897 "data_offset": 0, 00:15:21.897 "data_size": 
0 00:15:21.897 } 00:15:21.897 ] 00:15:21.897 }' 00:15:21.897 07:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.897 07:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.158 [2024-11-29 07:47:12.083156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.158 BaseBdev2 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.158 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.418 [ 00:15:22.418 { 00:15:22.418 "name": "BaseBdev2", 00:15:22.418 "aliases": [ 00:15:22.418 "b449de85-d23b-43b2-8bf0-d0ea5fff67d1" 00:15:22.418 ], 00:15:22.418 "product_name": "Malloc disk", 00:15:22.418 "block_size": 512, 00:15:22.418 "num_blocks": 65536, 00:15:22.418 "uuid": "b449de85-d23b-43b2-8bf0-d0ea5fff67d1", 00:15:22.419 "assigned_rate_limits": { 00:15:22.419 "rw_ios_per_sec": 0, 00:15:22.419 "rw_mbytes_per_sec": 0, 00:15:22.419 "r_mbytes_per_sec": 0, 00:15:22.419 "w_mbytes_per_sec": 0 00:15:22.419 }, 00:15:22.419 "claimed": true, 00:15:22.419 "claim_type": "exclusive_write", 00:15:22.419 "zoned": false, 00:15:22.419 "supported_io_types": { 00:15:22.419 "read": true, 00:15:22.419 "write": true, 00:15:22.419 "unmap": true, 00:15:22.419 "flush": true, 00:15:22.419 "reset": true, 00:15:22.419 "nvme_admin": false, 00:15:22.419 "nvme_io": false, 00:15:22.419 "nvme_io_md": false, 00:15:22.419 "write_zeroes": true, 00:15:22.419 "zcopy": true, 00:15:22.419 "get_zone_info": false, 00:15:22.419 "zone_management": false, 00:15:22.419 "zone_append": false, 00:15:22.419 "compare": false, 00:15:22.419 "compare_and_write": false, 00:15:22.419 "abort": true, 00:15:22.419 "seek_hole": false, 00:15:22.419 "seek_data": false, 00:15:22.419 "copy": true, 00:15:22.419 "nvme_iov_md": false 00:15:22.419 }, 00:15:22.419 "memory_domains": [ 00:15:22.419 { 00:15:22.419 "dma_device_id": "system", 00:15:22.419 "dma_device_type": 1 00:15:22.419 }, 00:15:22.419 { 00:15:22.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.419 "dma_device_type": 2 00:15:22.419 } 
00:15:22.419 ], 00:15:22.419 "driver_specific": {} 00:15:22.419 } 00:15:22.419 ] 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.419 07:47:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.419 "name": "Existed_Raid", 00:15:22.419 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:22.419 "strip_size_kb": 64, 00:15:22.419 "state": "configuring", 00:15:22.419 "raid_level": "raid5f", 00:15:22.419 "superblock": true, 00:15:22.419 "num_base_bdevs": 3, 00:15:22.419 "num_base_bdevs_discovered": 2, 00:15:22.419 "num_base_bdevs_operational": 3, 00:15:22.419 "base_bdevs_list": [ 00:15:22.419 { 00:15:22.419 "name": "BaseBdev1", 00:15:22.419 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:22.419 "is_configured": true, 00:15:22.419 "data_offset": 2048, 00:15:22.419 "data_size": 63488 00:15:22.419 }, 00:15:22.419 { 00:15:22.419 "name": "BaseBdev2", 00:15:22.419 "uuid": "b449de85-d23b-43b2-8bf0-d0ea5fff67d1", 00:15:22.419 "is_configured": true, 00:15:22.419 "data_offset": 2048, 00:15:22.419 "data_size": 63488 00:15:22.419 }, 00:15:22.419 { 00:15:22.419 "name": "BaseBdev3", 00:15:22.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.419 "is_configured": false, 00:15:22.419 "data_offset": 0, 00:15:22.419 "data_size": 0 00:15:22.419 } 00:15:22.419 ] 00:15:22.419 }' 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.419 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.679 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.679 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:22.679 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.939 [2024-11-29 07:47:12.656603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.939 [2024-11-29 07:47:12.656872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:22.939 [2024-11-29 07:47:12.656892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:22.939 [2024-11-29 07:47:12.657184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:22.939 BaseBdev3 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.939 [2024-11-29 07:47:12.662691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:22.939 [2024-11-29 07:47:12.662715] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:22.939 [2024-11-29 07:47:12.662873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.939 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.939 [ 00:15:22.939 { 00:15:22.939 "name": "BaseBdev3", 00:15:22.939 "aliases": [ 00:15:22.939 "6ccd9958-15cc-4be6-9030-e76b469c217b" 00:15:22.939 ], 00:15:22.939 "product_name": "Malloc disk", 00:15:22.939 "block_size": 512, 00:15:22.939 "num_blocks": 65536, 00:15:22.939 "uuid": "6ccd9958-15cc-4be6-9030-e76b469c217b", 00:15:22.939 "assigned_rate_limits": { 00:15:22.939 "rw_ios_per_sec": 0, 00:15:22.939 "rw_mbytes_per_sec": 0, 00:15:22.939 "r_mbytes_per_sec": 0, 00:15:22.939 "w_mbytes_per_sec": 0 00:15:22.939 }, 00:15:22.939 "claimed": true, 00:15:22.939 "claim_type": "exclusive_write", 00:15:22.939 "zoned": false, 00:15:22.940 "supported_io_types": { 00:15:22.940 "read": true, 00:15:22.940 "write": true, 00:15:22.940 "unmap": true, 00:15:22.940 "flush": true, 00:15:22.940 "reset": true, 00:15:22.940 "nvme_admin": false, 00:15:22.940 "nvme_io": false, 00:15:22.940 "nvme_io_md": false, 00:15:22.940 "write_zeroes": true, 00:15:22.940 "zcopy": true, 00:15:22.940 "get_zone_info": false, 00:15:22.940 "zone_management": false, 00:15:22.940 "zone_append": false, 00:15:22.940 "compare": false, 00:15:22.940 "compare_and_write": false, 00:15:22.940 "abort": true, 00:15:22.940 "seek_hole": false, 00:15:22.940 "seek_data": false, 00:15:22.940 "copy": true, 00:15:22.940 
"nvme_iov_md": false 00:15:22.940 }, 00:15:22.940 "memory_domains": [ 00:15:22.940 { 00:15:22.940 "dma_device_id": "system", 00:15:22.940 "dma_device_type": 1 00:15:22.940 }, 00:15:22.940 { 00:15:22.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.940 "dma_device_type": 2 00:15:22.940 } 00:15:22.940 ], 00:15:22.940 "driver_specific": {} 00:15:22.940 } 00:15:22.940 ] 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.940 "name": "Existed_Raid", 00:15:22.940 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:22.940 "strip_size_kb": 64, 00:15:22.940 "state": "online", 00:15:22.940 "raid_level": "raid5f", 00:15:22.940 "superblock": true, 00:15:22.940 "num_base_bdevs": 3, 00:15:22.940 "num_base_bdevs_discovered": 3, 00:15:22.940 "num_base_bdevs_operational": 3, 00:15:22.940 "base_bdevs_list": [ 00:15:22.940 { 00:15:22.940 "name": "BaseBdev1", 00:15:22.940 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:22.940 "is_configured": true, 00:15:22.940 "data_offset": 2048, 00:15:22.940 "data_size": 63488 00:15:22.940 }, 00:15:22.940 { 00:15:22.940 "name": "BaseBdev2", 00:15:22.940 "uuid": "b449de85-d23b-43b2-8bf0-d0ea5fff67d1", 00:15:22.940 "is_configured": true, 00:15:22.940 "data_offset": 2048, 00:15:22.940 "data_size": 63488 00:15:22.940 }, 00:15:22.940 { 00:15:22.940 "name": "BaseBdev3", 00:15:22.940 "uuid": "6ccd9958-15cc-4be6-9030-e76b469c217b", 00:15:22.940 "is_configured": true, 00:15:22.940 "data_offset": 2048, 00:15:22.940 "data_size": 63488 00:15:22.940 } 00:15:22.940 ] 00:15:22.940 }' 00:15:22.940 07:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.940 07:47:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.202 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.202 [2024-11-29 07:47:13.132422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.478 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.478 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.478 "name": "Existed_Raid", 00:15:23.478 "aliases": [ 00:15:23.478 "9f153d93-b546-4ad7-93f8-c11e202602b2" 00:15:23.478 ], 00:15:23.478 "product_name": "Raid Volume", 00:15:23.478 "block_size": 512, 00:15:23.478 "num_blocks": 126976, 00:15:23.478 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:23.478 "assigned_rate_limits": { 00:15:23.478 "rw_ios_per_sec": 0, 00:15:23.478 
"rw_mbytes_per_sec": 0, 00:15:23.478 "r_mbytes_per_sec": 0, 00:15:23.478 "w_mbytes_per_sec": 0 00:15:23.478 }, 00:15:23.478 "claimed": false, 00:15:23.478 "zoned": false, 00:15:23.478 "supported_io_types": { 00:15:23.478 "read": true, 00:15:23.478 "write": true, 00:15:23.478 "unmap": false, 00:15:23.478 "flush": false, 00:15:23.478 "reset": true, 00:15:23.478 "nvme_admin": false, 00:15:23.478 "nvme_io": false, 00:15:23.478 "nvme_io_md": false, 00:15:23.478 "write_zeroes": true, 00:15:23.478 "zcopy": false, 00:15:23.478 "get_zone_info": false, 00:15:23.478 "zone_management": false, 00:15:23.479 "zone_append": false, 00:15:23.479 "compare": false, 00:15:23.479 "compare_and_write": false, 00:15:23.479 "abort": false, 00:15:23.479 "seek_hole": false, 00:15:23.479 "seek_data": false, 00:15:23.479 "copy": false, 00:15:23.479 "nvme_iov_md": false 00:15:23.479 }, 00:15:23.479 "driver_specific": { 00:15:23.479 "raid": { 00:15:23.479 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:23.479 "strip_size_kb": 64, 00:15:23.479 "state": "online", 00:15:23.479 "raid_level": "raid5f", 00:15:23.479 "superblock": true, 00:15:23.479 "num_base_bdevs": 3, 00:15:23.479 "num_base_bdevs_discovered": 3, 00:15:23.479 "num_base_bdevs_operational": 3, 00:15:23.479 "base_bdevs_list": [ 00:15:23.479 { 00:15:23.479 "name": "BaseBdev1", 00:15:23.479 "uuid": "e303aab2-4b4a-4d66-8703-a6d7edb0f782", 00:15:23.479 "is_configured": true, 00:15:23.479 "data_offset": 2048, 00:15:23.479 "data_size": 63488 00:15:23.479 }, 00:15:23.479 { 00:15:23.479 "name": "BaseBdev2", 00:15:23.479 "uuid": "b449de85-d23b-43b2-8bf0-d0ea5fff67d1", 00:15:23.479 "is_configured": true, 00:15:23.479 "data_offset": 2048, 00:15:23.479 "data_size": 63488 00:15:23.479 }, 00:15:23.479 { 00:15:23.479 "name": "BaseBdev3", 00:15:23.479 "uuid": "6ccd9958-15cc-4be6-9030-e76b469c217b", 00:15:23.479 "is_configured": true, 00:15:23.479 "data_offset": 2048, 00:15:23.479 "data_size": 63488 00:15:23.479 } 00:15:23.479 ] 00:15:23.479 } 
00:15:23.479 } 00:15:23.479 }' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:23.479 BaseBdev2 00:15:23.479 BaseBdev3' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.479 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 [2024-11-29 07:47:13.371869] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.751 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.751 "name": "Existed_Raid", 00:15:23.752 "uuid": "9f153d93-b546-4ad7-93f8-c11e202602b2", 00:15:23.752 "strip_size_kb": 64, 00:15:23.752 "state": "online", 00:15:23.752 "raid_level": "raid5f", 00:15:23.752 "superblock": true, 00:15:23.752 "num_base_bdevs": 3, 00:15:23.752 "num_base_bdevs_discovered": 2, 00:15:23.752 "num_base_bdevs_operational": 2, 00:15:23.752 "base_bdevs_list": [ 00:15:23.752 { 00:15:23.752 "name": null, 00:15:23.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.752 "is_configured": false, 00:15:23.752 "data_offset": 0, 00:15:23.752 "data_size": 63488 00:15:23.752 }, 00:15:23.752 { 00:15:23.752 "name": "BaseBdev2", 00:15:23.752 "uuid": "b449de85-d23b-43b2-8bf0-d0ea5fff67d1", 00:15:23.752 "is_configured": true, 00:15:23.752 "data_offset": 2048, 00:15:23.752 "data_size": 63488 00:15:23.752 }, 00:15:23.752 { 00:15:23.752 "name": "BaseBdev3", 00:15:23.752 "uuid": "6ccd9958-15cc-4be6-9030-e76b469c217b", 00:15:23.752 "is_configured": true, 00:15:23.752 "data_offset": 2048, 00:15:23.752 "data_size": 63488 00:15:23.752 } 00:15:23.752 ] 00:15:23.752 }' 00:15:23.752 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.752 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.012 07:47:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.012 07:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.012 [2024-11-29 07:47:13.944001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.012 [2024-11-29 07:47:13.944163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.272 [2024-11-29 07:47:14.034778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.272 [2024-11-29 07:47:14.094682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.272 [2024-11-29 07:47:14.094731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.272 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.532 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.532 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.532 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 BaseBdev2 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 [ 00:15:24.533 { 00:15:24.533 "name": "BaseBdev2", 00:15:24.533 "aliases": [ 00:15:24.533 "0082e39f-d130-4732-b0c3-8a90eea3a1dd" 00:15:24.533 ], 00:15:24.533 "product_name": "Malloc disk", 00:15:24.533 "block_size": 512, 00:15:24.533 "num_blocks": 65536, 00:15:24.533 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:24.533 "assigned_rate_limits": { 00:15:24.533 "rw_ios_per_sec": 0, 00:15:24.533 "rw_mbytes_per_sec": 0, 00:15:24.533 "r_mbytes_per_sec": 0, 00:15:24.533 "w_mbytes_per_sec": 0 00:15:24.533 }, 00:15:24.533 "claimed": false, 00:15:24.533 "zoned": false, 00:15:24.533 "supported_io_types": { 00:15:24.533 "read": true, 00:15:24.533 "write": true, 00:15:24.533 "unmap": true, 00:15:24.533 "flush": true, 00:15:24.533 "reset": true, 00:15:24.533 "nvme_admin": false, 00:15:24.533 "nvme_io": false, 00:15:24.533 "nvme_io_md": false, 00:15:24.533 "write_zeroes": true, 00:15:24.533 "zcopy": true, 00:15:24.533 "get_zone_info": false, 00:15:24.533 "zone_management": false, 00:15:24.533 "zone_append": false, 
00:15:24.533 "compare": false, 00:15:24.533 "compare_and_write": false, 00:15:24.533 "abort": true, 00:15:24.533 "seek_hole": false, 00:15:24.533 "seek_data": false, 00:15:24.533 "copy": true, 00:15:24.533 "nvme_iov_md": false 00:15:24.533 }, 00:15:24.533 "memory_domains": [ 00:15:24.533 { 00:15:24.533 "dma_device_id": "system", 00:15:24.533 "dma_device_type": 1 00:15:24.533 }, 00:15:24.533 { 00:15:24.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.533 "dma_device_type": 2 00:15:24.533 } 00:15:24.533 ], 00:15:24.533 "driver_specific": {} 00:15:24.533 } 00:15:24.533 ] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 BaseBdev3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.533 
07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 [ 00:15:24.533 { 00:15:24.533 "name": "BaseBdev3", 00:15:24.533 "aliases": [ 00:15:24.533 "41e00fc4-d7d3-4378-9c81-a55a6628ed06" 00:15:24.533 ], 00:15:24.533 "product_name": "Malloc disk", 00:15:24.533 "block_size": 512, 00:15:24.533 "num_blocks": 65536, 00:15:24.533 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:24.533 "assigned_rate_limits": { 00:15:24.533 "rw_ios_per_sec": 0, 00:15:24.533 "rw_mbytes_per_sec": 0, 00:15:24.533 "r_mbytes_per_sec": 0, 00:15:24.533 "w_mbytes_per_sec": 0 00:15:24.533 }, 00:15:24.533 "claimed": false, 00:15:24.533 "zoned": false, 00:15:24.533 "supported_io_types": { 00:15:24.533 "read": true, 00:15:24.533 "write": true, 00:15:24.533 "unmap": true, 00:15:24.533 "flush": true, 00:15:24.533 "reset": true, 00:15:24.533 "nvme_admin": false, 00:15:24.533 "nvme_io": false, 00:15:24.533 "nvme_io_md": false, 00:15:24.533 "write_zeroes": true, 00:15:24.533 "zcopy": true, 00:15:24.533 "get_zone_info": 
false, 00:15:24.533 "zone_management": false, 00:15:24.533 "zone_append": false, 00:15:24.533 "compare": false, 00:15:24.533 "compare_and_write": false, 00:15:24.533 "abort": true, 00:15:24.533 "seek_hole": false, 00:15:24.533 "seek_data": false, 00:15:24.533 "copy": true, 00:15:24.533 "nvme_iov_md": false 00:15:24.533 }, 00:15:24.533 "memory_domains": [ 00:15:24.533 { 00:15:24.533 "dma_device_id": "system", 00:15:24.533 "dma_device_type": 1 00:15:24.533 }, 00:15:24.533 { 00:15:24.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.533 "dma_device_type": 2 00:15:24.533 } 00:15:24.533 ], 00:15:24.533 "driver_specific": {} 00:15:24.533 } 00:15:24.533 ] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.533 [2024-11-29 07:47:14.397235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.533 [2024-11-29 07:47:14.397282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.533 [2024-11-29 07:47:14.397303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.533 [2024-11-29 07:47:14.399003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.533 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.534 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.534 07:47:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.534 "name": "Existed_Raid", 00:15:24.534 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:24.534 "strip_size_kb": 64, 00:15:24.534 "state": "configuring", 00:15:24.534 "raid_level": "raid5f", 00:15:24.534 "superblock": true, 00:15:24.534 "num_base_bdevs": 3, 00:15:24.534 "num_base_bdevs_discovered": 2, 00:15:24.534 "num_base_bdevs_operational": 3, 00:15:24.534 "base_bdevs_list": [ 00:15:24.534 { 00:15:24.534 "name": "BaseBdev1", 00:15:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.534 "is_configured": false, 00:15:24.534 "data_offset": 0, 00:15:24.534 "data_size": 0 00:15:24.534 }, 00:15:24.534 { 00:15:24.534 "name": "BaseBdev2", 00:15:24.534 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:24.534 "is_configured": true, 00:15:24.534 "data_offset": 2048, 00:15:24.534 "data_size": 63488 00:15:24.534 }, 00:15:24.534 { 00:15:24.534 "name": "BaseBdev3", 00:15:24.534 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:24.534 "is_configured": true, 00:15:24.534 "data_offset": 2048, 00:15:24.534 "data_size": 63488 00:15:24.534 } 00:15:24.534 ] 00:15:24.534 }' 00:15:24.534 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.534 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.103 [2024-11-29 07:47:14.820503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.103 
07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.103 "name": "Existed_Raid", 00:15:25.103 "uuid": 
"ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:25.103 "strip_size_kb": 64, 00:15:25.103 "state": "configuring", 00:15:25.103 "raid_level": "raid5f", 00:15:25.103 "superblock": true, 00:15:25.103 "num_base_bdevs": 3, 00:15:25.103 "num_base_bdevs_discovered": 1, 00:15:25.103 "num_base_bdevs_operational": 3, 00:15:25.103 "base_bdevs_list": [ 00:15:25.103 { 00:15:25.103 "name": "BaseBdev1", 00:15:25.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.103 "is_configured": false, 00:15:25.103 "data_offset": 0, 00:15:25.103 "data_size": 0 00:15:25.103 }, 00:15:25.103 { 00:15:25.103 "name": null, 00:15:25.103 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:25.103 "is_configured": false, 00:15:25.103 "data_offset": 0, 00:15:25.103 "data_size": 63488 00:15:25.103 }, 00:15:25.103 { 00:15:25.103 "name": "BaseBdev3", 00:15:25.103 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:25.103 "is_configured": true, 00:15:25.103 "data_offset": 2048, 00:15:25.103 "data_size": 63488 00:15:25.103 } 00:15:25.103 ] 00:15:25.103 }' 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.103 07:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:25.362 07:47:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.362 [2024-11-29 07:47:15.234911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.362 BaseBdev1 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.362 [ 00:15:25.362 { 00:15:25.362 "name": "BaseBdev1", 00:15:25.362 "aliases": [ 00:15:25.362 "cf5f88c7-5723-43af-a3e5-f124f51c8db4" 00:15:25.362 ], 00:15:25.362 "product_name": "Malloc disk", 00:15:25.362 "block_size": 512, 00:15:25.362 "num_blocks": 65536, 00:15:25.362 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:25.362 "assigned_rate_limits": { 00:15:25.362 "rw_ios_per_sec": 0, 00:15:25.362 "rw_mbytes_per_sec": 0, 00:15:25.362 "r_mbytes_per_sec": 0, 00:15:25.362 "w_mbytes_per_sec": 0 00:15:25.362 }, 00:15:25.362 "claimed": true, 00:15:25.362 "claim_type": "exclusive_write", 00:15:25.362 "zoned": false, 00:15:25.362 "supported_io_types": { 00:15:25.362 "read": true, 00:15:25.362 "write": true, 00:15:25.362 "unmap": true, 00:15:25.362 "flush": true, 00:15:25.362 "reset": true, 00:15:25.362 "nvme_admin": false, 00:15:25.362 "nvme_io": false, 00:15:25.362 "nvme_io_md": false, 00:15:25.362 "write_zeroes": true, 00:15:25.362 "zcopy": true, 00:15:25.362 "get_zone_info": false, 00:15:25.362 "zone_management": false, 00:15:25.362 "zone_append": false, 00:15:25.362 "compare": false, 00:15:25.362 "compare_and_write": false, 00:15:25.362 "abort": true, 00:15:25.362 "seek_hole": false, 00:15:25.362 "seek_data": false, 00:15:25.362 "copy": true, 00:15:25.362 "nvme_iov_md": false 00:15:25.362 }, 00:15:25.362 "memory_domains": [ 00:15:25.362 { 00:15:25.362 "dma_device_id": "system", 00:15:25.362 "dma_device_type": 1 00:15:25.362 }, 00:15:25.362 { 00:15:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.362 "dma_device_type": 2 00:15:25.362 } 00:15:25.362 ], 00:15:25.362 "driver_specific": {} 00:15:25.362 } 00:15:25.362 ] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.362 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.363 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.621 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.621 "name": "Existed_Raid", 00:15:25.621 "uuid": 
"ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:25.621 "strip_size_kb": 64, 00:15:25.621 "state": "configuring", 00:15:25.621 "raid_level": "raid5f", 00:15:25.621 "superblock": true, 00:15:25.621 "num_base_bdevs": 3, 00:15:25.621 "num_base_bdevs_discovered": 2, 00:15:25.621 "num_base_bdevs_operational": 3, 00:15:25.621 "base_bdevs_list": [ 00:15:25.621 { 00:15:25.621 "name": "BaseBdev1", 00:15:25.621 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:25.621 "is_configured": true, 00:15:25.621 "data_offset": 2048, 00:15:25.621 "data_size": 63488 00:15:25.621 }, 00:15:25.621 { 00:15:25.621 "name": null, 00:15:25.621 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:25.621 "is_configured": false, 00:15:25.621 "data_offset": 0, 00:15:25.621 "data_size": 63488 00:15:25.621 }, 00:15:25.621 { 00:15:25.621 "name": "BaseBdev3", 00:15:25.621 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:25.621 "is_configured": true, 00:15:25.621 "data_offset": 2048, 00:15:25.621 "data_size": 63488 00:15:25.621 } 00:15:25.621 ] 00:15:25.622 }' 00:15:25.622 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.622 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:25.882 07:47:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 [2024-11-29 07:47:15.726165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.882 "name": "Existed_Raid", 00:15:25.882 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:25.882 "strip_size_kb": 64, 00:15:25.882 "state": "configuring", 00:15:25.882 "raid_level": "raid5f", 00:15:25.882 "superblock": true, 00:15:25.882 "num_base_bdevs": 3, 00:15:25.882 "num_base_bdevs_discovered": 1, 00:15:25.882 "num_base_bdevs_operational": 3, 00:15:25.882 "base_bdevs_list": [ 00:15:25.882 { 00:15:25.882 "name": "BaseBdev1", 00:15:25.882 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:25.882 "is_configured": true, 00:15:25.882 "data_offset": 2048, 00:15:25.882 "data_size": 63488 00:15:25.882 }, 00:15:25.882 { 00:15:25.882 "name": null, 00:15:25.882 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:25.882 "is_configured": false, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 63488 00:15:25.882 }, 00:15:25.882 { 00:15:25.882 "name": null, 00:15:25.882 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:25.882 "is_configured": false, 00:15:25.882 "data_offset": 0, 00:15:25.882 "data_size": 63488 00:15:25.882 } 00:15:25.882 ] 00:15:25.882 }' 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.882 07:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 [2024-11-29 07:47:16.161428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.452 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.453 "name": "Existed_Raid", 00:15:26.453 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:26.453 "strip_size_kb": 64, 00:15:26.453 "state": "configuring", 00:15:26.453 "raid_level": "raid5f", 00:15:26.453 "superblock": true, 00:15:26.453 "num_base_bdevs": 3, 00:15:26.453 "num_base_bdevs_discovered": 2, 00:15:26.453 "num_base_bdevs_operational": 3, 00:15:26.453 "base_bdevs_list": [ 00:15:26.453 { 00:15:26.453 "name": "BaseBdev1", 00:15:26.453 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:26.453 "is_configured": true, 00:15:26.453 "data_offset": 2048, 00:15:26.453 "data_size": 63488 00:15:26.453 }, 00:15:26.453 { 00:15:26.453 "name": null, 00:15:26.453 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:26.453 "is_configured": false, 00:15:26.453 "data_offset": 0, 00:15:26.453 "data_size": 63488 00:15:26.453 }, 00:15:26.453 { 00:15:26.453 "name": "BaseBdev3", 00:15:26.453 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 
00:15:26.453 "is_configured": true, 00:15:26.453 "data_offset": 2048, 00:15:26.453 "data_size": 63488 00:15:26.453 } 00:15:26.453 ] 00:15:26.453 }' 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.453 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.712 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.712 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.712 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.712 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.712 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.972 [2024-11-29 07:47:16.688526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.972 "name": "Existed_Raid", 00:15:26.972 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:26.972 "strip_size_kb": 64, 00:15:26.972 "state": "configuring", 00:15:26.972 "raid_level": "raid5f", 00:15:26.972 "superblock": true, 00:15:26.972 "num_base_bdevs": 3, 00:15:26.972 "num_base_bdevs_discovered": 1, 00:15:26.972 "num_base_bdevs_operational": 3, 00:15:26.972 "base_bdevs_list": [ 00:15:26.972 { 00:15:26.972 
"name": null, 00:15:26.972 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:26.972 "is_configured": false, 00:15:26.972 "data_offset": 0, 00:15:26.972 "data_size": 63488 00:15:26.972 }, 00:15:26.972 { 00:15:26.972 "name": null, 00:15:26.972 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:26.972 "is_configured": false, 00:15:26.972 "data_offset": 0, 00:15:26.972 "data_size": 63488 00:15:26.972 }, 00:15:26.972 { 00:15:26.972 "name": "BaseBdev3", 00:15:26.972 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:26.972 "is_configured": true, 00:15:26.972 "data_offset": 2048, 00:15:26.972 "data_size": 63488 00:15:26.972 } 00:15:26.972 ] 00:15:26.972 }' 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.972 07:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 [2024-11-29 
07:47:17.250611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.542 "name": "Existed_Raid", 00:15:27.542 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:27.542 "strip_size_kb": 64, 00:15:27.542 "state": "configuring", 00:15:27.542 "raid_level": "raid5f", 00:15:27.542 "superblock": true, 00:15:27.542 "num_base_bdevs": 3, 00:15:27.542 "num_base_bdevs_discovered": 2, 00:15:27.542 "num_base_bdevs_operational": 3, 00:15:27.542 "base_bdevs_list": [ 00:15:27.542 { 00:15:27.542 "name": null, 00:15:27.542 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:27.542 "is_configured": false, 00:15:27.543 "data_offset": 0, 00:15:27.543 "data_size": 63488 00:15:27.543 }, 00:15:27.543 { 00:15:27.543 "name": "BaseBdev2", 00:15:27.543 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:27.543 "is_configured": true, 00:15:27.543 "data_offset": 2048, 00:15:27.543 "data_size": 63488 00:15:27.543 }, 00:15:27.543 { 00:15:27.543 "name": "BaseBdev3", 00:15:27.543 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:27.543 "is_configured": true, 00:15:27.543 "data_offset": 2048, 00:15:27.543 "data_size": 63488 00:15:27.543 } 00:15:27.543 ] 00:15:27.543 }' 00:15:27.543 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.543 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.803 07:47:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf5f88c7-5723-43af-a3e5-f124f51c8db4 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.803 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.064 [2024-11-29 07:47:17.770210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.064 [2024-11-29 07:47:17.770438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:28.064 [2024-11-29 07:47:17.770455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.064 [2024-11-29 07:47:17.770691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:28.064 NewBaseBdev 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:28.064 07:47:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.064 [2024-11-29 07:47:17.776274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:28.064 [2024-11-29 07:47:17.776299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:28.064 [2024-11-29 07:47:17.776459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.064 [ 00:15:28.064 { 00:15:28.064 "name": "NewBaseBdev", 00:15:28.064 "aliases": [ 00:15:28.064 "cf5f88c7-5723-43af-a3e5-f124f51c8db4" 00:15:28.064 ], 00:15:28.064 "product_name": "Malloc 
disk", 00:15:28.064 "block_size": 512, 00:15:28.064 "num_blocks": 65536, 00:15:28.064 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:28.064 "assigned_rate_limits": { 00:15:28.064 "rw_ios_per_sec": 0, 00:15:28.064 "rw_mbytes_per_sec": 0, 00:15:28.064 "r_mbytes_per_sec": 0, 00:15:28.064 "w_mbytes_per_sec": 0 00:15:28.064 }, 00:15:28.064 "claimed": true, 00:15:28.064 "claim_type": "exclusive_write", 00:15:28.064 "zoned": false, 00:15:28.064 "supported_io_types": { 00:15:28.064 "read": true, 00:15:28.064 "write": true, 00:15:28.064 "unmap": true, 00:15:28.064 "flush": true, 00:15:28.064 "reset": true, 00:15:28.064 "nvme_admin": false, 00:15:28.064 "nvme_io": false, 00:15:28.064 "nvme_io_md": false, 00:15:28.064 "write_zeroes": true, 00:15:28.064 "zcopy": true, 00:15:28.064 "get_zone_info": false, 00:15:28.064 "zone_management": false, 00:15:28.064 "zone_append": false, 00:15:28.064 "compare": false, 00:15:28.064 "compare_and_write": false, 00:15:28.064 "abort": true, 00:15:28.064 "seek_hole": false, 00:15:28.064 "seek_data": false, 00:15:28.064 "copy": true, 00:15:28.064 "nvme_iov_md": false 00:15:28.064 }, 00:15:28.064 "memory_domains": [ 00:15:28.064 { 00:15:28.064 "dma_device_id": "system", 00:15:28.064 "dma_device_type": 1 00:15:28.064 }, 00:15:28.064 { 00:15:28.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.064 "dma_device_type": 2 00:15:28.064 } 00:15:28.064 ], 00:15:28.064 "driver_specific": {} 00:15:28.064 } 00:15:28.064 ] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.064 07:47:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.064 "name": "Existed_Raid", 00:15:28.064 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:28.064 "strip_size_kb": 64, 00:15:28.064 "state": "online", 00:15:28.064 "raid_level": "raid5f", 00:15:28.064 "superblock": true, 00:15:28.064 "num_base_bdevs": 3, 00:15:28.064 "num_base_bdevs_discovered": 3, 00:15:28.064 "num_base_bdevs_operational": 3, 00:15:28.064 
"base_bdevs_list": [ 00:15:28.064 { 00:15:28.064 "name": "NewBaseBdev", 00:15:28.064 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:28.064 "is_configured": true, 00:15:28.064 "data_offset": 2048, 00:15:28.064 "data_size": 63488 00:15:28.064 }, 00:15:28.064 { 00:15:28.064 "name": "BaseBdev2", 00:15:28.064 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:28.064 "is_configured": true, 00:15:28.064 "data_offset": 2048, 00:15:28.064 "data_size": 63488 00:15:28.064 }, 00:15:28.064 { 00:15:28.064 "name": "BaseBdev3", 00:15:28.064 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:28.064 "is_configured": true, 00:15:28.064 "data_offset": 2048, 00:15:28.064 "data_size": 63488 00:15:28.064 } 00:15:28.064 ] 00:15:28.064 }' 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.064 07:47:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.324 [2024-11-29 07:47:18.221894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.324 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.324 "name": "Existed_Raid", 00:15:28.324 "aliases": [ 00:15:28.324 "ad5ff17d-4075-467e-bc84-a65f291cd6af" 00:15:28.324 ], 00:15:28.324 "product_name": "Raid Volume", 00:15:28.324 "block_size": 512, 00:15:28.324 "num_blocks": 126976, 00:15:28.324 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:28.324 "assigned_rate_limits": { 00:15:28.324 "rw_ios_per_sec": 0, 00:15:28.324 "rw_mbytes_per_sec": 0, 00:15:28.324 "r_mbytes_per_sec": 0, 00:15:28.324 "w_mbytes_per_sec": 0 00:15:28.324 }, 00:15:28.324 "claimed": false, 00:15:28.324 "zoned": false, 00:15:28.324 "supported_io_types": { 00:15:28.324 "read": true, 00:15:28.324 "write": true, 00:15:28.324 "unmap": false, 00:15:28.324 "flush": false, 00:15:28.324 "reset": true, 00:15:28.324 "nvme_admin": false, 00:15:28.324 "nvme_io": false, 00:15:28.324 "nvme_io_md": false, 00:15:28.324 "write_zeroes": true, 00:15:28.324 "zcopy": false, 00:15:28.324 "get_zone_info": false, 00:15:28.324 "zone_management": false, 00:15:28.324 "zone_append": false, 00:15:28.324 "compare": false, 00:15:28.324 "compare_and_write": false, 00:15:28.324 "abort": false, 00:15:28.324 "seek_hole": false, 00:15:28.324 "seek_data": false, 00:15:28.324 "copy": false, 00:15:28.324 "nvme_iov_md": false 00:15:28.324 }, 00:15:28.324 "driver_specific": { 00:15:28.324 "raid": { 00:15:28.324 "uuid": "ad5ff17d-4075-467e-bc84-a65f291cd6af", 00:15:28.324 "strip_size_kb": 64, 00:15:28.324 "state": "online", 00:15:28.325 "raid_level": "raid5f", 00:15:28.325 "superblock": true, 00:15:28.325 
"num_base_bdevs": 3, 00:15:28.325 "num_base_bdevs_discovered": 3, 00:15:28.325 "num_base_bdevs_operational": 3, 00:15:28.325 "base_bdevs_list": [ 00:15:28.325 { 00:15:28.325 "name": "NewBaseBdev", 00:15:28.325 "uuid": "cf5f88c7-5723-43af-a3e5-f124f51c8db4", 00:15:28.325 "is_configured": true, 00:15:28.325 "data_offset": 2048, 00:15:28.325 "data_size": 63488 00:15:28.325 }, 00:15:28.325 { 00:15:28.325 "name": "BaseBdev2", 00:15:28.325 "uuid": "0082e39f-d130-4732-b0c3-8a90eea3a1dd", 00:15:28.325 "is_configured": true, 00:15:28.325 "data_offset": 2048, 00:15:28.325 "data_size": 63488 00:15:28.325 }, 00:15:28.325 { 00:15:28.325 "name": "BaseBdev3", 00:15:28.325 "uuid": "41e00fc4-d7d3-4378-9c81-a55a6628ed06", 00:15:28.325 "is_configured": true, 00:15:28.325 "data_offset": 2048, 00:15:28.325 "data_size": 63488 00:15:28.325 } 00:15:28.325 ] 00:15:28.325 } 00:15:28.325 } 00:15:28.325 }' 00:15:28.325 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:28.585 BaseBdev2 00:15:28.585 BaseBdev3' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.585 07:47:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.585 [2024-11-29 07:47:18.485227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.585 [2024-11-29 07:47:18.485254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.585 [2024-11-29 07:47:18.485321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.585 [2024-11-29 07:47:18.485591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.585 [2024-11-29 07:47:18.485611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80212 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80212 ']' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80212 00:15:28.585 07:47:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80212 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.585 killing process with pid 80212 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80212' 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80212 00:15:28.585 [2024-11-29 07:47:18.520112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.585 07:47:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80212 00:15:29.156 [2024-11-29 07:47:18.802117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.095 07:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:30.095 00:15:30.095 real 0m10.185s 00:15:30.095 user 0m16.153s 00:15:30.095 sys 0m1.886s 00:15:30.095 07:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.095 07:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.095 ************************************ 00:15:30.095 END TEST raid5f_state_function_test_sb 00:15:30.095 ************************************ 00:15:30.095 07:47:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:30.095 07:47:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:30.095 
07:47:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.095 07:47:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.095 ************************************ 00:15:30.095 START TEST raid5f_superblock_test 00:15:30.095 ************************************ 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80827 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80827 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80827 ']' 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.095 07:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.354 [2024-11-29 07:47:20.039775] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:30.354 [2024-11-29 07:47:20.039890] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80827 ] 00:15:30.354 [2024-11-29 07:47:20.214299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.614 [2024-11-29 07:47:20.323111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.614 [2024-11-29 07:47:20.518147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.614 [2024-11-29 07:47:20.518207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.183 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 malloc1 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 [2024-11-29 07:47:20.921659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.184 [2024-11-29 07:47:20.921735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.184 [2024-11-29 07:47:20.921755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.184 [2024-11-29 07:47:20.921765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.184 [2024-11-29 07:47:20.923773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.184 [2024-11-29 07:47:20.923809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.184 pt1 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 malloc2 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 [2024-11-29 07:47:20.972876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.184 [2024-11-29 07:47:20.972926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.184 [2024-11-29 07:47:20.972966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.184 [2024-11-29 07:47:20.972974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.184 [2024-11-29 07:47:20.974954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.184 [2024-11-29 07:47:20.974989] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.184 pt2 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 malloc3 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 [2024-11-29 07:47:21.058361] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:31.184 [2024-11-29 07:47:21.058413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.184 [2024-11-29 07:47:21.058434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.184 [2024-11-29 07:47:21.058445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.184 [2024-11-29 07:47:21.060482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.184 [2024-11-29 07:47:21.060517] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:31.184 pt3 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 [2024-11-29 07:47:21.070393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.184 [2024-11-29 07:47:21.072167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.184 [2024-11-29 07:47:21.072233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:31.184 [2024-11-29 07:47:21.072394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:31.184 [2024-11-29 07:47:21.072413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:31.184 [2024-11-29 07:47:21.072646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:31.184 [2024-11-29 07:47:21.078385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:31.184 [2024-11-29 07:47:21.078408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:31.184 [2024-11-29 07:47:21.078606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.184 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:31.185 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.185 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.185 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.445 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.445 "name": "raid_bdev1", 00:15:31.445 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:31.445 "strip_size_kb": 64, 00:15:31.445 "state": "online", 00:15:31.445 "raid_level": "raid5f", 00:15:31.445 "superblock": true, 00:15:31.445 "num_base_bdevs": 3, 00:15:31.445 "num_base_bdevs_discovered": 3, 00:15:31.445 "num_base_bdevs_operational": 3, 00:15:31.445 "base_bdevs_list": [ 00:15:31.445 { 00:15:31.445 "name": "pt1", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 }, 00:15:31.445 { 00:15:31.445 "name": "pt2", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 }, 00:15:31.445 { 00:15:31.445 "name": "pt3", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 } 00:15:31.445 ] 00:15:31.445 }' 00:15:31.445 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.445 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:31.705 07:47:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 [2024-11-29 07:47:21.536295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.705 "name": "raid_bdev1", 00:15:31.705 "aliases": [ 00:15:31.705 "96d8e9d7-2699-4421-94ee-227a722df92b" 00:15:31.705 ], 00:15:31.705 "product_name": "Raid Volume", 00:15:31.705 "block_size": 512, 00:15:31.705 "num_blocks": 126976, 00:15:31.705 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:31.705 "assigned_rate_limits": { 00:15:31.705 "rw_ios_per_sec": 0, 00:15:31.705 "rw_mbytes_per_sec": 0, 00:15:31.705 "r_mbytes_per_sec": 0, 00:15:31.705 "w_mbytes_per_sec": 0 00:15:31.705 }, 00:15:31.705 "claimed": false, 00:15:31.705 "zoned": false, 00:15:31.705 "supported_io_types": { 00:15:31.705 "read": true, 00:15:31.705 "write": true, 00:15:31.705 "unmap": false, 00:15:31.705 "flush": false, 00:15:31.705 "reset": true, 00:15:31.705 "nvme_admin": false, 00:15:31.705 "nvme_io": false, 00:15:31.705 "nvme_io_md": false, 
00:15:31.705 "write_zeroes": true, 00:15:31.705 "zcopy": false, 00:15:31.705 "get_zone_info": false, 00:15:31.705 "zone_management": false, 00:15:31.705 "zone_append": false, 00:15:31.705 "compare": false, 00:15:31.705 "compare_and_write": false, 00:15:31.705 "abort": false, 00:15:31.705 "seek_hole": false, 00:15:31.705 "seek_data": false, 00:15:31.705 "copy": false, 00:15:31.705 "nvme_iov_md": false 00:15:31.705 }, 00:15:31.705 "driver_specific": { 00:15:31.705 "raid": { 00:15:31.705 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:31.705 "strip_size_kb": 64, 00:15:31.705 "state": "online", 00:15:31.705 "raid_level": "raid5f", 00:15:31.705 "superblock": true, 00:15:31.705 "num_base_bdevs": 3, 00:15:31.705 "num_base_bdevs_discovered": 3, 00:15:31.705 "num_base_bdevs_operational": 3, 00:15:31.705 "base_bdevs_list": [ 00:15:31.705 { 00:15:31.705 "name": "pt1", 00:15:31.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.705 "is_configured": true, 00:15:31.705 "data_offset": 2048, 00:15:31.705 "data_size": 63488 00:15:31.705 }, 00:15:31.705 { 00:15:31.705 "name": "pt2", 00:15:31.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.705 "is_configured": true, 00:15:31.705 "data_offset": 2048, 00:15:31.705 "data_size": 63488 00:15:31.705 }, 00:15:31.705 { 00:15:31.705 "name": "pt3", 00:15:31.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.705 "is_configured": true, 00:15:31.705 "data_offset": 2048, 00:15:31.705 "data_size": 63488 00:15:31.705 } 00:15:31.705 ] 00:15:31.705 } 00:15:31.705 } 00:15:31.705 }' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:31.705 pt2 00:15:31.705 pt3' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.966 
07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 [2024-11-29 07:47:21.787881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=96d8e9d7-2699-4421-94ee-227a722df92b 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 96d8e9d7-2699-4421-94ee-227a722df92b ']' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.966 07:47:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 [2024-11-29 07:47:21.831631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.966 [2024-11-29 07:47:21.831659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.966 [2024-11-29 07:47:21.831721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.966 [2024-11-29 07:47:21.831790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.966 [2024-11-29 07:47:21.831799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.966 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 [2024-11-29 07:47:21.967443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.226 [2024-11-29 07:47:21.969221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.226 [2024-11-29 07:47:21.969275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:32.226 [2024-11-29 07:47:21.969322] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:32.226 [2024-11-29 07:47:21.969363] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:32.226 [2024-11-29 07:47:21.969381] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:32.226 [2024-11-29 07:47:21.969395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.226 [2024-11-29 07:47:21.969403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:32.226 request: 00:15:32.226 { 00:15:32.226 "name": "raid_bdev1", 00:15:32.226 "raid_level": "raid5f", 00:15:32.226 "base_bdevs": [ 00:15:32.226 "malloc1", 00:15:32.226 "malloc2", 00:15:32.226 "malloc3" 00:15:32.226 ], 00:15:32.226 "strip_size_kb": 64, 00:15:32.226 "superblock": false, 00:15:32.226 "method": "bdev_raid_create", 00:15:32.226 "req_id": 1 00:15:32.226 } 00:15:32.226 Got JSON-RPC error response 00:15:32.226 response: 00:15:32.226 { 00:15:32.226 "code": -17, 00:15:32.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.226 } 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.226 
07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 07:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.226 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:32.226 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:32.226 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.226 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.226 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 [2024-11-29 07:47:22.015311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.226 [2024-11-29 07:47:22.015355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.226 [2024-11-29 07:47:22.015372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.226 [2024-11-29 07:47:22.015379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.226 [2024-11-29 07:47:22.017490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.226 [2024-11-29 07:47:22.017522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.226 [2024-11-29 07:47:22.017587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:32.226 [2024-11-29 07:47:22.017634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.226 pt1 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.227 "name": "raid_bdev1", 00:15:32.227 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:32.227 "strip_size_kb": 64, 00:15:32.227 "state": "configuring", 00:15:32.227 "raid_level": "raid5f", 00:15:32.227 "superblock": true, 00:15:32.227 "num_base_bdevs": 3, 00:15:32.227 "num_base_bdevs_discovered": 1, 00:15:32.227 
"num_base_bdevs_operational": 3, 00:15:32.227 "base_bdevs_list": [ 00:15:32.227 { 00:15:32.227 "name": "pt1", 00:15:32.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.227 "is_configured": true, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 }, 00:15:32.227 { 00:15:32.227 "name": null, 00:15:32.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.227 "is_configured": false, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 }, 00:15:32.227 { 00:15:32.227 "name": null, 00:15:32.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.227 "is_configured": false, 00:15:32.227 "data_offset": 2048, 00:15:32.227 "data_size": 63488 00:15:32.227 } 00:15:32.227 ] 00:15:32.227 }' 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.227 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.796 [2024-11-29 07:47:22.462582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.796 [2024-11-29 07:47:22.462640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.796 [2024-11-29 07:47:22.462660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:32.796 [2024-11-29 07:47:22.462669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.796 [2024-11-29 07:47:22.463058] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.796 [2024-11-29 07:47:22.463081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.796 [2024-11-29 07:47:22.463179] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:32.796 [2024-11-29 07:47:22.463206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.796 pt2 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.796 [2024-11-29 07:47:22.474562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.796 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.797 "name": "raid_bdev1", 00:15:32.797 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:32.797 "strip_size_kb": 64, 00:15:32.797 "state": "configuring", 00:15:32.797 "raid_level": "raid5f", 00:15:32.797 "superblock": true, 00:15:32.797 "num_base_bdevs": 3, 00:15:32.797 "num_base_bdevs_discovered": 1, 00:15:32.797 "num_base_bdevs_operational": 3, 00:15:32.797 "base_bdevs_list": [ 00:15:32.797 { 00:15:32.797 "name": "pt1", 00:15:32.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.797 "is_configured": true, 00:15:32.797 "data_offset": 2048, 00:15:32.797 "data_size": 63488 00:15:32.797 }, 00:15:32.797 { 00:15:32.797 "name": null, 00:15:32.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.797 "is_configured": false, 00:15:32.797 "data_offset": 0, 00:15:32.797 "data_size": 63488 00:15:32.797 }, 00:15:32.797 { 00:15:32.797 "name": null, 00:15:32.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.797 "is_configured": false, 00:15:32.797 "data_offset": 2048, 00:15:32.797 "data_size": 63488 00:15:32.797 } 00:15:32.797 ] 00:15:32.797 }' 00:15:32.797 07:47:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.797 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.057 [2024-11-29 07:47:22.885863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.057 [2024-11-29 07:47:22.885946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.057 [2024-11-29 07:47:22.885963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:33.057 [2024-11-29 07:47:22.885974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.057 [2024-11-29 07:47:22.886415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.057 [2024-11-29 07:47:22.886445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.057 [2024-11-29 07:47:22.886522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.057 [2024-11-29 07:47:22.886546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.057 pt2 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:33.057 07:47:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.057 [2024-11-29 07:47:22.897823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:33.057 [2024-11-29 07:47:22.897883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.057 [2024-11-29 07:47:22.897897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:33.057 [2024-11-29 07:47:22.897906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.057 [2024-11-29 07:47:22.898265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.057 [2024-11-29 07:47:22.898293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:33.057 [2024-11-29 07:47:22.898349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:33.057 [2024-11-29 07:47:22.898369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:33.057 [2024-11-29 07:47:22.898497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:33.057 [2024-11-29 07:47:22.898518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.057 [2024-11-29 07:47:22.898741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:33.057 [2024-11-29 07:47:22.904008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:33.057 [2024-11-29 07:47:22.904031] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:33.057 [2024-11-29 07:47:22.904217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.057 pt3 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.057 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.057 "name": "raid_bdev1", 00:15:33.057 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:33.057 "strip_size_kb": 64, 00:15:33.057 "state": "online", 00:15:33.057 "raid_level": "raid5f", 00:15:33.057 "superblock": true, 00:15:33.057 "num_base_bdevs": 3, 00:15:33.057 "num_base_bdevs_discovered": 3, 00:15:33.057 "num_base_bdevs_operational": 3, 00:15:33.057 "base_bdevs_list": [ 00:15:33.057 { 00:15:33.057 "name": "pt1", 00:15:33.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.057 "is_configured": true, 00:15:33.057 "data_offset": 2048, 00:15:33.058 "data_size": 63488 00:15:33.058 }, 00:15:33.058 { 00:15:33.058 "name": "pt2", 00:15:33.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.058 "is_configured": true, 00:15:33.058 "data_offset": 2048, 00:15:33.058 "data_size": 63488 00:15:33.058 }, 00:15:33.058 { 00:15:33.058 "name": "pt3", 00:15:33.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.058 "is_configured": true, 00:15:33.058 "data_offset": 2048, 00:15:33.058 "data_size": 63488 00:15:33.058 } 00:15:33.058 ] 00:15:33.058 }' 00:15:33.058 07:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.058 07:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.629 [2024-11-29 07:47:23.334071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.629 "name": "raid_bdev1", 00:15:33.629 "aliases": [ 00:15:33.629 "96d8e9d7-2699-4421-94ee-227a722df92b" 00:15:33.629 ], 00:15:33.629 "product_name": "Raid Volume", 00:15:33.629 "block_size": 512, 00:15:33.629 "num_blocks": 126976, 00:15:33.629 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:33.629 "assigned_rate_limits": { 00:15:33.629 "rw_ios_per_sec": 0, 00:15:33.629 "rw_mbytes_per_sec": 0, 00:15:33.629 "r_mbytes_per_sec": 0, 00:15:33.629 "w_mbytes_per_sec": 0 00:15:33.629 }, 00:15:33.629 "claimed": false, 00:15:33.629 "zoned": false, 00:15:33.629 "supported_io_types": { 00:15:33.629 "read": true, 00:15:33.629 "write": true, 00:15:33.629 "unmap": false, 00:15:33.629 "flush": false, 00:15:33.629 "reset": true, 00:15:33.629 "nvme_admin": false, 00:15:33.629 "nvme_io": false, 00:15:33.629 "nvme_io_md": false, 00:15:33.629 "write_zeroes": true, 00:15:33.629 "zcopy": false, 00:15:33.629 
"get_zone_info": false, 00:15:33.629 "zone_management": false, 00:15:33.629 "zone_append": false, 00:15:33.629 "compare": false, 00:15:33.629 "compare_and_write": false, 00:15:33.629 "abort": false, 00:15:33.629 "seek_hole": false, 00:15:33.629 "seek_data": false, 00:15:33.629 "copy": false, 00:15:33.629 "nvme_iov_md": false 00:15:33.629 }, 00:15:33.629 "driver_specific": { 00:15:33.629 "raid": { 00:15:33.629 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:33.629 "strip_size_kb": 64, 00:15:33.629 "state": "online", 00:15:33.629 "raid_level": "raid5f", 00:15:33.629 "superblock": true, 00:15:33.629 "num_base_bdevs": 3, 00:15:33.629 "num_base_bdevs_discovered": 3, 00:15:33.629 "num_base_bdevs_operational": 3, 00:15:33.629 "base_bdevs_list": [ 00:15:33.629 { 00:15:33.629 "name": "pt1", 00:15:33.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.629 "is_configured": true, 00:15:33.629 "data_offset": 2048, 00:15:33.629 "data_size": 63488 00:15:33.629 }, 00:15:33.629 { 00:15:33.629 "name": "pt2", 00:15:33.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.629 "is_configured": true, 00:15:33.629 "data_offset": 2048, 00:15:33.629 "data_size": 63488 00:15:33.629 }, 00:15:33.629 { 00:15:33.629 "name": "pt3", 00:15:33.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.629 "is_configured": true, 00:15:33.629 "data_offset": 2048, 00:15:33.629 "data_size": 63488 00:15:33.629 } 00:15:33.629 ] 00:15:33.629 } 00:15:33.629 } 00:15:33.629 }' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:33.629 pt2 00:15:33.629 pt3' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.629 07:47:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.629 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.889 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.889 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.889 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.889 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.890 [2024-11-29 07:47:23.621542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 96d8e9d7-2699-4421-94ee-227a722df92b '!=' 96d8e9d7-2699-4421-94ee-227a722df92b ']' 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.890 [2024-11-29 07:47:23.665338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.890 "name": "raid_bdev1", 00:15:33.890 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:33.890 "strip_size_kb": 64, 00:15:33.890 "state": "online", 00:15:33.890 "raid_level": "raid5f", 00:15:33.890 "superblock": true, 00:15:33.890 "num_base_bdevs": 3, 00:15:33.890 "num_base_bdevs_discovered": 2, 00:15:33.890 "num_base_bdevs_operational": 2, 00:15:33.890 "base_bdevs_list": [ 00:15:33.890 { 00:15:33.890 "name": null, 00:15:33.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.890 "is_configured": false, 00:15:33.890 "data_offset": 0, 00:15:33.890 "data_size": 63488 00:15:33.890 }, 00:15:33.890 { 00:15:33.890 "name": "pt2", 00:15:33.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.890 "is_configured": true, 00:15:33.890 "data_offset": 2048, 00:15:33.890 "data_size": 63488 00:15:33.890 }, 00:15:33.890 { 00:15:33.890 "name": "pt3", 00:15:33.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.890 "is_configured": true, 00:15:33.890 "data_offset": 2048, 00:15:33.890 "data_size": 63488 00:15:33.890 } 00:15:33.890 ] 00:15:33.890 }' 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.890 07:47:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 [2024-11-29 07:47:24.124578] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.461 [2024-11-29 07:47:24.124603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.461 [2024-11-29 07:47:24.124667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.461 [2024-11-29 07:47:24.124720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.461 [2024-11-29 07:47:24.124733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 [2024-11-29 07:47:24.208416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.461 [2024-11-29 07:47:24.208468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.461 [2024-11-29 07:47:24.208484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:34.461 [2024-11-29 07:47:24.208493] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:34.461 [2024-11-29 07:47:24.210508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.461 [2024-11-29 07:47:24.210548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.461 [2024-11-29 07:47:24.210619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.461 [2024-11-29 07:47:24.210665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.461 pt2 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.461 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.461 "name": "raid_bdev1", 00:15:34.461 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:34.461 "strip_size_kb": 64, 00:15:34.461 "state": "configuring", 00:15:34.461 "raid_level": "raid5f", 00:15:34.461 "superblock": true, 00:15:34.461 "num_base_bdevs": 3, 00:15:34.461 "num_base_bdevs_discovered": 1, 00:15:34.462 "num_base_bdevs_operational": 2, 00:15:34.462 "base_bdevs_list": [ 00:15:34.462 { 00:15:34.462 "name": null, 00:15:34.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.462 "is_configured": false, 00:15:34.462 "data_offset": 2048, 00:15:34.462 "data_size": 63488 00:15:34.462 }, 00:15:34.462 { 00:15:34.462 "name": "pt2", 00:15:34.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.462 "is_configured": true, 00:15:34.462 "data_offset": 2048, 00:15:34.462 "data_size": 63488 00:15:34.462 }, 00:15:34.462 { 00:15:34.462 "name": null, 00:15:34.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.462 "is_configured": false, 00:15:34.462 "data_offset": 2048, 00:15:34.462 "data_size": 63488 00:15:34.462 } 00:15:34.462 ] 00:15:34.462 }' 00:15:34.462 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.462 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.722 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.722 [2024-11-29 07:47:24.663736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:34.722 [2024-11-29 07:47:24.663865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.722 [2024-11-29 07:47:24.663941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:34.722 [2024-11-29 07:47:24.663987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.722 [2024-11-29 07:47:24.664552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.722 [2024-11-29 07:47:24.664620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:34.722 [2024-11-29 07:47:24.664737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:34.722 [2024-11-29 07:47:24.664797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:34.722 [2024-11-29 07:47:24.664952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:34.722 [2024-11-29 07:47:24.664999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.722 [2024-11-29 07:47:24.665297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:34.982 [2024-11-29 07:47:24.671123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:34.982 [2024-11-29 07:47:24.671176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:34.982 [2024-11-29 07:47:24.671518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.982 pt3 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.982 07:47:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.982 "name": "raid_bdev1", 00:15:34.982 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:34.982 "strip_size_kb": 64, 00:15:34.982 "state": "online", 00:15:34.982 "raid_level": "raid5f", 00:15:34.982 "superblock": true, 00:15:34.982 "num_base_bdevs": 3, 00:15:34.982 "num_base_bdevs_discovered": 2, 00:15:34.982 "num_base_bdevs_operational": 2, 00:15:34.982 "base_bdevs_list": [ 00:15:34.982 { 00:15:34.982 "name": null, 00:15:34.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.982 "is_configured": false, 00:15:34.982 "data_offset": 2048, 00:15:34.982 "data_size": 63488 00:15:34.982 }, 00:15:34.982 { 00:15:34.982 "name": "pt2", 00:15:34.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.982 "is_configured": true, 00:15:34.982 "data_offset": 2048, 00:15:34.982 "data_size": 63488 00:15:34.982 }, 00:15:34.982 { 00:15:34.982 "name": "pt3", 00:15:34.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.982 "is_configured": true, 00:15:34.982 "data_offset": 2048, 00:15:34.982 "data_size": 63488 00:15:34.982 } 00:15:34.982 ] 00:15:34.982 }' 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.982 07:47:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.242 [2024-11-29 07:47:25.065863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.242 [2024-11-29 07:47:25.065892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.242 [2024-11-29 07:47:25.065961] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.242 [2024-11-29 07:47:25.066021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.242 [2024-11-29 07:47:25.066030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.242 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.242 [2024-11-29 07:47:25.133752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.242 [2024-11-29 07:47:25.133855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.243 [2024-11-29 07:47:25.133890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:35.243 [2024-11-29 07:47:25.133916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.243 [2024-11-29 07:47:25.136150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.243 [2024-11-29 07:47:25.136219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.243 [2024-11-29 07:47:25.136320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.243 [2024-11-29 07:47:25.136393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.243 [2024-11-29 07:47:25.136584] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:35.243 [2024-11-29 07:47:25.136642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.243 [2024-11-29 07:47:25.136679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:35.243 [2024-11-29 07:47:25.136785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.243 pt1 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:35.243 07:47:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.243 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.509 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.509 "name": "raid_bdev1", 00:15:35.509 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:35.509 "strip_size_kb": 64, 00:15:35.509 "state": "configuring", 00:15:35.509 "raid_level": "raid5f", 00:15:35.509 
"superblock": true, 00:15:35.509 "num_base_bdevs": 3, 00:15:35.509 "num_base_bdevs_discovered": 1, 00:15:35.509 "num_base_bdevs_operational": 2, 00:15:35.510 "base_bdevs_list": [ 00:15:35.510 { 00:15:35.510 "name": null, 00:15:35.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.510 "is_configured": false, 00:15:35.510 "data_offset": 2048, 00:15:35.510 "data_size": 63488 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "name": "pt2", 00:15:35.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.510 "is_configured": true, 00:15:35.510 "data_offset": 2048, 00:15:35.510 "data_size": 63488 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "name": null, 00:15:35.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.510 "is_configured": false, 00:15:35.510 "data_offset": 2048, 00:15:35.510 "data_size": 63488 00:15:35.510 } 00:15:35.510 ] 00:15:35.510 }' 00:15:35.510 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.510 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.785 [2024-11-29 07:47:25.621006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:35.785 [2024-11-29 07:47:25.621120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.785 [2024-11-29 07:47:25.621145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:35.785 [2024-11-29 07:47:25.621155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.785 [2024-11-29 07:47:25.621603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.785 [2024-11-29 07:47:25.621621] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:35.785 [2024-11-29 07:47:25.621699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:35.785 [2024-11-29 07:47:25.621719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.785 [2024-11-29 07:47:25.621856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:35.785 [2024-11-29 07:47:25.621864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:35.785 [2024-11-29 07:47:25.622127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:35.785 [2024-11-29 07:47:25.628131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:35.785 pt3 00:15:35.785 [2024-11-29 07:47:25.628199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:35.785 [2024-11-29 07:47:25.628439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.785 "name": "raid_bdev1", 00:15:35.785 "uuid": "96d8e9d7-2699-4421-94ee-227a722df92b", 00:15:35.785 "strip_size_kb": 64, 00:15:35.785 "state": "online", 00:15:35.785 "raid_level": 
"raid5f", 00:15:35.785 "superblock": true, 00:15:35.785 "num_base_bdevs": 3, 00:15:35.785 "num_base_bdevs_discovered": 2, 00:15:35.785 "num_base_bdevs_operational": 2, 00:15:35.785 "base_bdevs_list": [ 00:15:35.785 { 00:15:35.785 "name": null, 00:15:35.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.785 "is_configured": false, 00:15:35.785 "data_offset": 2048, 00:15:35.785 "data_size": 63488 00:15:35.785 }, 00:15:35.785 { 00:15:35.785 "name": "pt2", 00:15:35.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.785 "is_configured": true, 00:15:35.785 "data_offset": 2048, 00:15:35.785 "data_size": 63488 00:15:35.785 }, 00:15:35.785 { 00:15:35.785 "name": "pt3", 00:15:35.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.785 "is_configured": true, 00:15:35.785 "data_offset": 2048, 00:15:35.785 "data_size": 63488 00:15:35.785 } 00:15:35.785 ] 00:15:35.785 }' 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.785 07:47:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.375 [2024-11-29 07:47:26.158271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 96d8e9d7-2699-4421-94ee-227a722df92b '!=' 96d8e9d7-2699-4421-94ee-227a722df92b ']' 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80827 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80827 ']' 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80827 00:15:36.375 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80827 00:15:36.376 killing process with pid 80827 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80827' 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80827 00:15:36.376 [2024-11-29 07:47:26.221390] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.376 [2024-11-29 07:47:26.221480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:36.376 [2024-11-29 07:47:26.221547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.376 [2024-11-29 07:47:26.221561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:36.376 07:47:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80827 00:15:36.636 [2024-11-29 07:47:26.505511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.020 07:47:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:38.020 00:15:38.020 real 0m7.623s 00:15:38.020 user 0m11.915s 00:15:38.020 sys 0m1.431s 00:15:38.020 07:47:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.020 07:47:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.020 ************************************ 00:15:38.020 END TEST raid5f_superblock_test 00:15:38.020 ************************************ 00:15:38.020 07:47:27 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:38.020 07:47:27 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:38.020 07:47:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:38.020 07:47:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.020 07:47:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.020 ************************************ 00:15:38.020 START TEST raid5f_rebuild_test 00:15:38.020 ************************************ 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.020 07:47:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:38.020 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81271 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81271 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81271 ']' 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.021 07:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.021 [2024-11-29 07:47:27.747005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:38.021 [2024-11-29 07:47:27.747240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.021 Zero copy mechanism will not be used. 00:15:38.021 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81271 ] 00:15:38.021 [2024-11-29 07:47:27.916082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.281 [2024-11-29 07:47:28.023310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.281 [2024-11-29 07:47:28.213080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.281 [2024-11-29 07:47:28.213118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 BaseBdev1_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 
07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-29 07:47:28.599668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.852 [2024-11-29 07:47:28.599731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.852 [2024-11-29 07:47:28.599768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:38.852 [2024-11-29 07:47:28.599779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.852 [2024-11-29 07:47:28.601834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.852 [2024-11-29 07:47:28.601875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.852 BaseBdev1 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 BaseBdev2_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-29 07:47:28.649258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:38.852 [2024-11-29 07:47:28.649369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.852 [2024-11-29 07:47:28.649393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:38.852 [2024-11-29 07:47:28.649404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.852 [2024-11-29 07:47:28.651391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.852 [2024-11-29 07:47:28.651432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:38.852 BaseBdev2 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 BaseBdev3_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-29 07:47:28.716214] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:38.852 [2024-11-29 07:47:28.716322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.852 [2024-11-29 07:47:28.716348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:38.852 [2024-11-29 07:47:28.716358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.852 [2024-11-29 07:47:28.718344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.852 [2024-11-29 07:47:28.718382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:38.852 BaseBdev3 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 spare_malloc 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 spare_delay 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-29 07:47:28.781064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.852 [2024-11-29 07:47:28.781126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.852 [2024-11-29 07:47:28.781142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:38.852 [2024-11-29 07:47:28.781152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.852 [2024-11-29 07:47:28.783145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.852 [2024-11-29 07:47:28.783239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.852 spare 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-29 07:47:28.793110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.852 [2024-11-29 07:47:28.794907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.852 [2024-11-29 07:47:28.795011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.852 [2024-11-29 07:47:28.795159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:38.852 [2024-11-29 07:47:28.795203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:38.852 [2024-11-29 
07:47:28.795468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.113 [2024-11-29 07:47:28.801078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.113 [2024-11-29 07:47:28.801145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.113 [2024-11-29 07:47:28.801345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.113 "name": "raid_bdev1", 00:15:39.113 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:39.113 "strip_size_kb": 64, 00:15:39.113 "state": "online", 00:15:39.113 "raid_level": "raid5f", 00:15:39.113 "superblock": false, 00:15:39.113 "num_base_bdevs": 3, 00:15:39.113 "num_base_bdevs_discovered": 3, 00:15:39.113 "num_base_bdevs_operational": 3, 00:15:39.113 "base_bdevs_list": [ 00:15:39.113 { 00:15:39.113 "name": "BaseBdev1", 00:15:39.113 "uuid": "a0d55fcb-f343-52dd-ab07-381b6747ab4f", 00:15:39.113 "is_configured": true, 00:15:39.113 "data_offset": 0, 00:15:39.113 "data_size": 65536 00:15:39.113 }, 00:15:39.113 { 00:15:39.113 "name": "BaseBdev2", 00:15:39.113 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:39.113 "is_configured": true, 00:15:39.113 "data_offset": 0, 00:15:39.113 "data_size": 65536 00:15:39.113 }, 00:15:39.113 { 00:15:39.113 "name": "BaseBdev3", 00:15:39.113 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:39.113 "is_configured": true, 00:15:39.113 "data_offset": 0, 00:15:39.113 "data_size": 65536 00:15:39.113 } 00:15:39.113 ] 00:15:39.113 }' 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.113 07:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 07:47:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 [2024-11-29 07:47:29.215092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.373 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:39.633 [2024-11-29 07:47:29.498435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:39.633 /dev/nbd0 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.633 1+0 records in 00:15:39.633 1+0 records out 00:15:39.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003883 s, 10.5 MB/s 00:15:39.633 07:47:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:39.633 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:40.203 512+0 records in 00:15:40.203 512+0 records out 00:15:40.203 67108864 bytes (67 MB, 64 MiB) copied, 0.350746 s, 191 MB/s 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:40.203 07:47:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.203 [2024-11-29 07:47:30.136246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.203 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.463 [2024-11-29 07:47:30.155422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.463 07:47:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.463 "name": "raid_bdev1", 00:15:40.463 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:40.463 "strip_size_kb": 64, 00:15:40.463 "state": "online", 00:15:40.463 "raid_level": "raid5f", 00:15:40.463 "superblock": false, 00:15:40.463 "num_base_bdevs": 3, 00:15:40.463 "num_base_bdevs_discovered": 2, 00:15:40.463 "num_base_bdevs_operational": 2, 00:15:40.463 "base_bdevs_list": [ 00:15:40.463 { 00:15:40.463 "name": null, 00:15:40.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.463 "is_configured": false, 00:15:40.463 "data_offset": 0, 00:15:40.463 "data_size": 65536 00:15:40.463 }, 00:15:40.463 { 00:15:40.463 
"name": "BaseBdev2", 00:15:40.463 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:40.463 "is_configured": true, 00:15:40.463 "data_offset": 0, 00:15:40.463 "data_size": 65536 00:15:40.463 }, 00:15:40.463 { 00:15:40.463 "name": "BaseBdev3", 00:15:40.463 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:40.463 "is_configured": true, 00:15:40.463 "data_offset": 0, 00:15:40.463 "data_size": 65536 00:15:40.463 } 00:15:40.463 ] 00:15:40.463 }' 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.463 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.723 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.723 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.723 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.723 [2024-11-29 07:47:30.590699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.723 [2024-11-29 07:47:30.607651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:40.723 07:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.723 07:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.723 [2024-11-29 07:47:30.614430] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.104 "name": "raid_bdev1", 00:15:42.104 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:42.104 "strip_size_kb": 64, 00:15:42.104 "state": "online", 00:15:42.104 "raid_level": "raid5f", 00:15:42.104 "superblock": false, 00:15:42.104 "num_base_bdevs": 3, 00:15:42.104 "num_base_bdevs_discovered": 3, 00:15:42.104 "num_base_bdevs_operational": 3, 00:15:42.104 "process": { 00:15:42.104 "type": "rebuild", 00:15:42.104 "target": "spare", 00:15:42.104 "progress": { 00:15:42.104 "blocks": 20480, 00:15:42.104 "percent": 15 00:15:42.104 } 00:15:42.104 }, 00:15:42.104 "base_bdevs_list": [ 00:15:42.104 { 00:15:42.104 "name": "spare", 00:15:42.104 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:42.104 "is_configured": true, 00:15:42.104 "data_offset": 0, 00:15:42.104 "data_size": 65536 00:15:42.104 }, 00:15:42.104 { 00:15:42.104 "name": "BaseBdev2", 00:15:42.104 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:42.104 "is_configured": true, 00:15:42.104 "data_offset": 0, 00:15:42.104 "data_size": 65536 00:15:42.104 }, 00:15:42.104 { 00:15:42.104 "name": "BaseBdev3", 00:15:42.104 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:42.104 "is_configured": true, 00:15:42.104 "data_offset": 0, 00:15:42.104 
"data_size": 65536 00:15:42.104 } 00:15:42.104 ] 00:15:42.104 }' 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.104 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.105 [2024-11-29 07:47:31.749937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.105 [2024-11-29 07:47:31.822022] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.105 [2024-11-29 07:47:31.822078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.105 [2024-11-29 07:47:31.822096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.105 [2024-11-29 07:47:31.822114] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.105 "name": "raid_bdev1", 00:15:42.105 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:42.105 "strip_size_kb": 64, 00:15:42.105 "state": "online", 00:15:42.105 "raid_level": "raid5f", 00:15:42.105 "superblock": false, 00:15:42.105 "num_base_bdevs": 3, 00:15:42.105 "num_base_bdevs_discovered": 2, 00:15:42.105 "num_base_bdevs_operational": 2, 00:15:42.105 "base_bdevs_list": [ 00:15:42.105 { 00:15:42.105 "name": null, 00:15:42.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.105 "is_configured": false, 00:15:42.105 "data_offset": 0, 00:15:42.105 "data_size": 65536 00:15:42.105 }, 00:15:42.105 { 00:15:42.105 "name": "BaseBdev2", 00:15:42.105 
"uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:42.105 "is_configured": true, 00:15:42.105 "data_offset": 0, 00:15:42.105 "data_size": 65536 00:15:42.105 }, 00:15:42.105 { 00:15:42.105 "name": "BaseBdev3", 00:15:42.105 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:42.105 "is_configured": true, 00:15:42.105 "data_offset": 0, 00:15:42.105 "data_size": 65536 00:15:42.105 } 00:15:42.105 ] 00:15:42.105 }' 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.105 07:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.625 "name": "raid_bdev1", 00:15:42.625 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:42.625 "strip_size_kb": 64, 00:15:42.625 "state": "online", 00:15:42.625 "raid_level": 
"raid5f", 00:15:42.625 "superblock": false, 00:15:42.625 "num_base_bdevs": 3, 00:15:42.625 "num_base_bdevs_discovered": 2, 00:15:42.625 "num_base_bdevs_operational": 2, 00:15:42.625 "base_bdevs_list": [ 00:15:42.625 { 00:15:42.625 "name": null, 00:15:42.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.625 "is_configured": false, 00:15:42.625 "data_offset": 0, 00:15:42.625 "data_size": 65536 00:15:42.625 }, 00:15:42.625 { 00:15:42.625 "name": "BaseBdev2", 00:15:42.625 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:42.625 "is_configured": true, 00:15:42.625 "data_offset": 0, 00:15:42.625 "data_size": 65536 00:15:42.625 }, 00:15:42.625 { 00:15:42.625 "name": "BaseBdev3", 00:15:42.625 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:42.625 "is_configured": true, 00:15:42.625 "data_offset": 0, 00:15:42.625 "data_size": 65536 00:15:42.625 } 00:15:42.625 ] 00:15:42.625 }' 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.625 [2024-11-29 07:47:32.430184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.625 [2024-11-29 07:47:32.445037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.625 07:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.625 [2024-11-29 07:47:32.452260] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.564 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.565 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.565 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.565 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.565 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.565 "name": "raid_bdev1", 00:15:43.565 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:43.565 "strip_size_kb": 64, 00:15:43.565 "state": "online", 00:15:43.565 "raid_level": "raid5f", 00:15:43.565 "superblock": false, 00:15:43.565 "num_base_bdevs": 3, 00:15:43.565 "num_base_bdevs_discovered": 3, 00:15:43.565 "num_base_bdevs_operational": 3, 00:15:43.565 "process": { 00:15:43.565 "type": "rebuild", 00:15:43.565 "target": "spare", 00:15:43.565 "progress": { 00:15:43.565 "blocks": 20480, 00:15:43.565 
"percent": 15 00:15:43.565 } 00:15:43.565 }, 00:15:43.565 "base_bdevs_list": [ 00:15:43.565 { 00:15:43.565 "name": "spare", 00:15:43.565 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:43.565 "is_configured": true, 00:15:43.565 "data_offset": 0, 00:15:43.565 "data_size": 65536 00:15:43.565 }, 00:15:43.565 { 00:15:43.565 "name": "BaseBdev2", 00:15:43.565 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:43.565 "is_configured": true, 00:15:43.565 "data_offset": 0, 00:15:43.565 "data_size": 65536 00:15:43.565 }, 00:15:43.565 { 00:15:43.565 "name": "BaseBdev3", 00:15:43.565 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:43.565 "is_configured": true, 00:15:43.565 "data_offset": 0, 00:15:43.565 "data_size": 65536 00:15:43.565 } 00:15:43.565 ] 00:15:43.565 }' 00:15:43.565 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.825 "name": "raid_bdev1", 00:15:43.825 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:43.825 "strip_size_kb": 64, 00:15:43.825 "state": "online", 00:15:43.825 "raid_level": "raid5f", 00:15:43.825 "superblock": false, 00:15:43.825 "num_base_bdevs": 3, 00:15:43.825 "num_base_bdevs_discovered": 3, 00:15:43.825 "num_base_bdevs_operational": 3, 00:15:43.825 "process": { 00:15:43.825 "type": "rebuild", 00:15:43.825 "target": "spare", 00:15:43.825 "progress": { 00:15:43.825 "blocks": 22528, 00:15:43.825 "percent": 17 00:15:43.825 } 00:15:43.825 }, 00:15:43.825 "base_bdevs_list": [ 00:15:43.825 { 00:15:43.825 "name": "spare", 00:15:43.825 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:43.825 "is_configured": true, 00:15:43.825 "data_offset": 0, 00:15:43.825 "data_size": 65536 00:15:43.825 }, 00:15:43.825 { 00:15:43.825 "name": "BaseBdev2", 00:15:43.825 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:43.825 "is_configured": true, 00:15:43.825 "data_offset": 0, 00:15:43.825 
"data_size": 65536 00:15:43.825 }, 00:15:43.825 { 00:15:43.825 "name": "BaseBdev3", 00:15:43.825 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:43.825 "is_configured": true, 00:15:43.825 "data_offset": 0, 00:15:43.825 "data_size": 65536 00:15:43.825 } 00:15:43.825 ] 00:15:43.825 }' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.825 07:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.765 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.765 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.765 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.025 "name": "raid_bdev1", 00:15:45.025 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:45.025 "strip_size_kb": 64, 00:15:45.025 "state": "online", 00:15:45.025 "raid_level": "raid5f", 00:15:45.025 "superblock": false, 00:15:45.025 "num_base_bdevs": 3, 00:15:45.025 "num_base_bdevs_discovered": 3, 00:15:45.025 "num_base_bdevs_operational": 3, 00:15:45.025 "process": { 00:15:45.025 "type": "rebuild", 00:15:45.025 "target": "spare", 00:15:45.025 "progress": { 00:15:45.025 "blocks": 45056, 00:15:45.025 "percent": 34 00:15:45.025 } 00:15:45.025 }, 00:15:45.025 "base_bdevs_list": [ 00:15:45.025 { 00:15:45.025 "name": "spare", 00:15:45.025 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:45.025 "is_configured": true, 00:15:45.025 "data_offset": 0, 00:15:45.025 "data_size": 65536 00:15:45.025 }, 00:15:45.025 { 00:15:45.025 "name": "BaseBdev2", 00:15:45.025 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:45.025 "is_configured": true, 00:15:45.025 "data_offset": 0, 00:15:45.025 "data_size": 65536 00:15:45.025 }, 00:15:45.025 { 00:15:45.025 "name": "BaseBdev3", 00:15:45.025 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:45.025 "is_configured": true, 00:15:45.025 "data_offset": 0, 00:15:45.025 "data_size": 65536 00:15:45.025 } 00:15:45.025 ] 00:15:45.025 }' 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.025 07:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.980 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.980 "name": "raid_bdev1", 00:15:45.980 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:45.980 "strip_size_kb": 64, 00:15:45.981 "state": "online", 00:15:45.981 "raid_level": "raid5f", 00:15:45.981 "superblock": false, 00:15:45.981 "num_base_bdevs": 3, 00:15:45.981 "num_base_bdevs_discovered": 3, 00:15:45.981 "num_base_bdevs_operational": 3, 00:15:45.981 "process": { 00:15:45.981 "type": "rebuild", 00:15:45.981 "target": "spare", 00:15:45.981 "progress": { 00:15:45.981 "blocks": 67584, 00:15:45.981 "percent": 51 00:15:45.981 } 00:15:45.981 }, 00:15:45.981 "base_bdevs_list": [ 00:15:45.981 { 00:15:45.981 "name": "spare", 00:15:45.981 "uuid": 
"92bd455c-fa9f-5885-8905-88c13b058583", 00:15:45.981 "is_configured": true, 00:15:45.981 "data_offset": 0, 00:15:45.981 "data_size": 65536 00:15:45.981 }, 00:15:45.981 { 00:15:45.981 "name": "BaseBdev2", 00:15:45.981 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:45.981 "is_configured": true, 00:15:45.981 "data_offset": 0, 00:15:45.981 "data_size": 65536 00:15:45.981 }, 00:15:45.981 { 00:15:45.981 "name": "BaseBdev3", 00:15:45.981 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:45.981 "is_configured": true, 00:15:45.981 "data_offset": 0, 00:15:45.981 "data_size": 65536 00:15:45.981 } 00:15:45.981 ] 00:15:45.981 }' 00:15:45.981 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.243 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.243 07:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.243 07:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.243 07:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.182 07:47:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.182 "name": "raid_bdev1", 00:15:47.182 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:47.182 "strip_size_kb": 64, 00:15:47.182 "state": "online", 00:15:47.182 "raid_level": "raid5f", 00:15:47.182 "superblock": false, 00:15:47.182 "num_base_bdevs": 3, 00:15:47.182 "num_base_bdevs_discovered": 3, 00:15:47.182 "num_base_bdevs_operational": 3, 00:15:47.182 "process": { 00:15:47.182 "type": "rebuild", 00:15:47.182 "target": "spare", 00:15:47.182 "progress": { 00:15:47.182 "blocks": 92160, 00:15:47.182 "percent": 70 00:15:47.182 } 00:15:47.182 }, 00:15:47.182 "base_bdevs_list": [ 00:15:47.182 { 00:15:47.182 "name": "spare", 00:15:47.182 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:47.182 "is_configured": true, 00:15:47.182 "data_offset": 0, 00:15:47.182 "data_size": 65536 00:15:47.182 }, 00:15:47.182 { 00:15:47.182 "name": "BaseBdev2", 00:15:47.182 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:47.182 "is_configured": true, 00:15:47.182 "data_offset": 0, 00:15:47.182 "data_size": 65536 00:15:47.182 }, 00:15:47.182 { 00:15:47.182 "name": "BaseBdev3", 00:15:47.182 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:47.182 "is_configured": true, 00:15:47.182 "data_offset": 0, 00:15:47.182 "data_size": 65536 00:15:47.182 } 00:15:47.182 ] 00:15:47.182 }' 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.182 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.442 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.442 07:47:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.382 "name": "raid_bdev1", 00:15:48.382 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:48.382 "strip_size_kb": 64, 00:15:48.382 "state": "online", 00:15:48.382 "raid_level": "raid5f", 00:15:48.382 "superblock": false, 00:15:48.382 "num_base_bdevs": 3, 00:15:48.382 "num_base_bdevs_discovered": 3, 00:15:48.382 
"num_base_bdevs_operational": 3, 00:15:48.382 "process": { 00:15:48.382 "type": "rebuild", 00:15:48.382 "target": "spare", 00:15:48.382 "progress": { 00:15:48.382 "blocks": 114688, 00:15:48.382 "percent": 87 00:15:48.382 } 00:15:48.382 }, 00:15:48.382 "base_bdevs_list": [ 00:15:48.382 { 00:15:48.382 "name": "spare", 00:15:48.382 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:48.382 "is_configured": true, 00:15:48.382 "data_offset": 0, 00:15:48.382 "data_size": 65536 00:15:48.382 }, 00:15:48.382 { 00:15:48.382 "name": "BaseBdev2", 00:15:48.382 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:48.382 "is_configured": true, 00:15:48.382 "data_offset": 0, 00:15:48.382 "data_size": 65536 00:15:48.382 }, 00:15:48.382 { 00:15:48.382 "name": "BaseBdev3", 00:15:48.382 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:48.382 "is_configured": true, 00:15:48.382 "data_offset": 0, 00:15:48.382 "data_size": 65536 00:15:48.382 } 00:15:48.382 ] 00:15:48.382 }' 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.382 07:47:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.951 [2024-11-29 07:47:38.890826] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:48.951 [2024-11-29 07:47:38.890899] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:48.951 [2024-11-29 07:47:38.890939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.542 "name": "raid_bdev1", 00:15:49.542 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:49.542 "strip_size_kb": 64, 00:15:49.542 "state": "online", 00:15:49.542 "raid_level": "raid5f", 00:15:49.542 "superblock": false, 00:15:49.542 "num_base_bdevs": 3, 00:15:49.542 "num_base_bdevs_discovered": 3, 00:15:49.542 "num_base_bdevs_operational": 3, 00:15:49.542 "base_bdevs_list": [ 00:15:49.542 { 00:15:49.542 "name": "spare", 00:15:49.542 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:49.542 "is_configured": true, 00:15:49.542 "data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 }, 00:15:49.542 { 00:15:49.542 "name": "BaseBdev2", 00:15:49.542 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:49.542 "is_configured": true, 00:15:49.542 
"data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 }, 00:15:49.542 { 00:15:49.542 "name": "BaseBdev3", 00:15:49.542 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:49.542 "is_configured": true, 00:15:49.542 "data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 } 00:15:49.542 ] 00:15:49.542 }' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.542 07:47:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.542 "name": "raid_bdev1", 00:15:49.542 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:49.542 "strip_size_kb": 64, 00:15:49.542 "state": "online", 00:15:49.542 "raid_level": "raid5f", 00:15:49.542 "superblock": false, 00:15:49.542 "num_base_bdevs": 3, 00:15:49.542 "num_base_bdevs_discovered": 3, 00:15:49.542 "num_base_bdevs_operational": 3, 00:15:49.542 "base_bdevs_list": [ 00:15:49.542 { 00:15:49.542 "name": "spare", 00:15:49.542 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:49.542 "is_configured": true, 00:15:49.542 "data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 }, 00:15:49.542 { 00:15:49.542 "name": "BaseBdev2", 00:15:49.542 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:49.542 "is_configured": true, 00:15:49.542 "data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 }, 00:15:49.542 { 00:15:49.542 "name": "BaseBdev3", 00:15:49.542 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:49.542 "is_configured": true, 00:15:49.542 "data_offset": 0, 00:15:49.542 "data_size": 65536 00:15:49.542 } 00:15:49.542 ] 00:15:49.542 }' 00:15:49.542 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.802 07:47:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.802 "name": "raid_bdev1", 00:15:49.802 "uuid": "2903ff7c-45de-48ae-9850-4d7724f76842", 00:15:49.802 "strip_size_kb": 64, 00:15:49.802 "state": "online", 00:15:49.802 "raid_level": "raid5f", 00:15:49.802 "superblock": false, 00:15:49.802 "num_base_bdevs": 3, 00:15:49.802 "num_base_bdevs_discovered": 3, 00:15:49.802 "num_base_bdevs_operational": 3, 00:15:49.802 "base_bdevs_list": [ 00:15:49.802 { 00:15:49.802 "name": "spare", 00:15:49.802 "uuid": "92bd455c-fa9f-5885-8905-88c13b058583", 00:15:49.802 "is_configured": true, 00:15:49.802 "data_offset": 0, 00:15:49.802 "data_size": 65536 00:15:49.802 }, 00:15:49.802 { 00:15:49.802 
"name": "BaseBdev2", 00:15:49.802 "uuid": "da36db6f-0541-54fd-a3cc-e956a1f1e9e1", 00:15:49.802 "is_configured": true, 00:15:49.802 "data_offset": 0, 00:15:49.802 "data_size": 65536 00:15:49.802 }, 00:15:49.802 { 00:15:49.802 "name": "BaseBdev3", 00:15:49.802 "uuid": "d69dcf3e-6024-5920-ab71-d7ef167c9862", 00:15:49.802 "is_configured": true, 00:15:49.802 "data_offset": 0, 00:15:49.802 "data_size": 65536 00:15:49.802 } 00:15:49.802 ] 00:15:49.802 }' 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.802 07:47:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.372 [2024-11-29 07:47:40.051519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.372 [2024-11-29 07:47:40.051549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.372 [2024-11-29 07:47:40.051634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.372 [2024-11-29 07:47:40.051716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.372 [2024-11-29 07:47:40.051730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.372 07:47:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:50.372 /dev/nbd0 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.372 07:47:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.372 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.637 1+0 records in 00:15:50.637 1+0 records out 00:15:50.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262061 s, 15.6 MB/s 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:50.637 /dev/nbd1 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.637 1+0 records in 00:15:50.637 1+0 records out 00:15:50.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024099 s, 17.0 MB/s 00:15:50.637 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.903 07:47:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.903 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.163 07:47:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81271 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81271 ']' 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81271 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81271 00:15:51.421 killing process with pid 81271 00:15:51.421 Received shutdown signal, test time was about 60.000000 seconds 00:15:51.421 00:15:51.421 Latency(us) 00:15:51.421 
[2024-11-29T07:47:41.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.421 [2024-11-29T07:47:41.366Z] =================================================================================================================== 00:15:51.421 [2024-11-29T07:47:41.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81271' 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81271 00:15:51.421 [2024-11-29 07:47:41.243481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.421 07:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81271 00:15:51.680 [2024-11-29 07:47:41.616195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.060 00:15:53.060 real 0m15.003s 00:15:53.060 user 0m18.390s 00:15:53.060 sys 0m1.986s 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 ************************************ 00:15:53.060 END TEST raid5f_rebuild_test 00:15:53.060 ************************************ 00:15:53.060 07:47:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:53.060 07:47:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.060 07:47:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.060 07:47:42 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.060 ************************************ 00:15:53.060 START TEST raid5f_rebuild_test_sb 00:15:53.060 ************************************ 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81711 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81711 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81711 ']' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.060 07:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.060 Zero copy mechanism will not be used. 00:15:53.060 [2024-11-29 07:47:42.830383] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:53.060 [2024-11-29 07:47:42.830490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81711 ] 00:15:53.320 [2024-11-29 07:47:43.008548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.320 [2024-11-29 07:47:43.112361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.579 [2024-11-29 07:47:43.306342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.579 [2024-11-29 07:47:43.306384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.839 BaseBdev1_malloc 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.839 [2024-11-29 07:47:43.677821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.839 [2024-11-29 07:47:43.677881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.839 [2024-11-29 07:47:43.677905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:53.839 [2024-11-29 07:47:43.677916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.839 [2024-11-29 07:47:43.680034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.839 [2024-11-29 07:47:43.680071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.839 BaseBdev1 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.839 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:53.840 07:47:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.840 BaseBdev2_malloc 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.840 [2024-11-29 07:47:43.733291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:53.840 [2024-11-29 07:47:43.733347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.840 [2024-11-29 07:47:43.733369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.840 [2024-11-29 07:47:43.733381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.840 [2024-11-29 07:47:43.735391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.840 [2024-11-29 07:47:43.735426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.840 BaseBdev2 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.840 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:54.100 BaseBdev3_malloc 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 [2024-11-29 07:47:43.798019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.100 [2024-11-29 07:47:43.798083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.100 [2024-11-29 07:47:43.798105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.100 [2024-11-29 07:47:43.798125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.100 [2024-11-29 07:47:43.800153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.100 [2024-11-29 07:47:43.800188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.100 BaseBdev3 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 spare_malloc 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 spare_delay 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 [2024-11-29 07:47:43.864764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.100 [2024-11-29 07:47:43.864813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.100 [2024-11-29 07:47:43.864828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:54.100 [2024-11-29 07:47:43.864839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.100 [2024-11-29 07:47:43.866800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.100 [2024-11-29 07:47:43.866840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.100 spare 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 [2024-11-29 07:47:43.876842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.100 [2024-11-29 07:47:43.878551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.100 [2024-11-29 07:47:43.878630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.100 [2024-11-29 07:47:43.878805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:54.100 [2024-11-29 07:47:43.878821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:54.100 [2024-11-29 07:47:43.879062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:54.100 [2024-11-29 07:47:43.884689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.100 [2024-11-29 07:47:43.884717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.100 [2024-11-29 07:47:43.884887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.100 "name": "raid_bdev1", 00:15:54.100 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:54.100 "strip_size_kb": 64, 00:15:54.100 "state": "online", 00:15:54.100 "raid_level": "raid5f", 00:15:54.100 "superblock": true, 00:15:54.100 "num_base_bdevs": 3, 00:15:54.100 "num_base_bdevs_discovered": 3, 00:15:54.100 "num_base_bdevs_operational": 3, 00:15:54.100 "base_bdevs_list": [ 00:15:54.100 { 00:15:54.100 "name": "BaseBdev1", 00:15:54.100 "uuid": "b72e1e52-1254-511e-adcb-18e2f2333c62", 00:15:54.100 "is_configured": true, 00:15:54.100 "data_offset": 2048, 00:15:54.100 "data_size": 63488 00:15:54.100 }, 00:15:54.100 { 00:15:54.100 "name": "BaseBdev2", 00:15:54.100 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:54.100 "is_configured": true, 00:15:54.100 "data_offset": 2048, 00:15:54.100 "data_size": 63488 00:15:54.100 }, 00:15:54.100 { 00:15:54.100 "name": "BaseBdev3", 00:15:54.100 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:54.100 "is_configured": true, 
00:15:54.100 "data_offset": 2048, 00:15:54.100 "data_size": 63488 00:15:54.100 } 00:15:54.100 ] 00:15:54.100 }' 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.100 07:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.669 [2024-11-29 07:47:44.334575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:54.669 07:47:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.669 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:54.669 [2024-11-29 07:47:44.582026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:54.669 /dev/nbd0 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.929 1+0 records in 00:15:54.929 1+0 records out 00:15:54.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038736 s, 10.6 MB/s 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:54.929 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:55.189 496+0 records in 00:15:55.189 496+0 records out 00:15:55.189 65011712 bytes (65 MB, 62 MiB) copied, 0.305146 s, 213 MB/s 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.189 07:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.446 [2024-11-29 07:47:45.171718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.446 [2024-11-29 07:47:45.186500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.446 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.447 07:47:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.447 "name": "raid_bdev1", 00:15:55.447 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:55.447 "strip_size_kb": 64, 00:15:55.447 "state": "online", 00:15:55.447 "raid_level": "raid5f", 00:15:55.447 "superblock": true, 00:15:55.447 "num_base_bdevs": 3, 00:15:55.447 "num_base_bdevs_discovered": 2, 00:15:55.447 "num_base_bdevs_operational": 2, 00:15:55.447 "base_bdevs_list": [ 00:15:55.447 { 00:15:55.447 "name": null, 00:15:55.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.447 "is_configured": false, 00:15:55.447 "data_offset": 0, 00:15:55.447 "data_size": 63488 00:15:55.447 }, 00:15:55.447 { 00:15:55.447 "name": "BaseBdev2", 00:15:55.447 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:55.447 "is_configured": true, 00:15:55.447 "data_offset": 2048, 00:15:55.447 "data_size": 63488 00:15:55.447 }, 00:15:55.447 { 00:15:55.447 "name": "BaseBdev3", 00:15:55.447 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:55.447 "is_configured": true, 00:15:55.447 "data_offset": 2048, 00:15:55.447 "data_size": 63488 00:15:55.447 } 00:15:55.447 ] 00:15:55.447 }' 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.447 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.705 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.705 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.705 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.705 [2024-11-29 07:47:45.585785] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.705 [2024-11-29 07:47:45.601574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:55.705 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.705 07:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.705 [2024-11-29 07:47:45.608539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.086 "name": "raid_bdev1", 00:15:57.086 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:57.086 "strip_size_kb": 64, 00:15:57.086 "state": "online", 00:15:57.086 "raid_level": "raid5f", 00:15:57.086 
"superblock": true, 00:15:57.086 "num_base_bdevs": 3, 00:15:57.086 "num_base_bdevs_discovered": 3, 00:15:57.086 "num_base_bdevs_operational": 3, 00:15:57.086 "process": { 00:15:57.086 "type": "rebuild", 00:15:57.086 "target": "spare", 00:15:57.086 "progress": { 00:15:57.086 "blocks": 20480, 00:15:57.086 "percent": 16 00:15:57.086 } 00:15:57.086 }, 00:15:57.086 "base_bdevs_list": [ 00:15:57.086 { 00:15:57.086 "name": "spare", 00:15:57.086 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 2048, 00:15:57.086 "data_size": 63488 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev2", 00:15:57.086 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 2048, 00:15:57.086 "data_size": 63488 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev3", 00:15:57.086 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 2048, 00:15:57.086 "data_size": 63488 00:15:57.086 } 00:15:57.086 ] 00:15:57.086 }' 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.086 [2024-11-29 07:47:46.771567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:57.086 [2024-11-29 07:47:46.816241] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.086 [2024-11-29 07:47:46.816289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.086 [2024-11-29 07:47:46.816306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.086 [2024-11-29 07:47:46.816313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.086 "name": "raid_bdev1", 00:15:57.086 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:57.086 "strip_size_kb": 64, 00:15:57.086 "state": "online", 00:15:57.086 "raid_level": "raid5f", 00:15:57.086 "superblock": true, 00:15:57.086 "num_base_bdevs": 3, 00:15:57.086 "num_base_bdevs_discovered": 2, 00:15:57.086 "num_base_bdevs_operational": 2, 00:15:57.086 "base_bdevs_list": [ 00:15:57.086 { 00:15:57.086 "name": null, 00:15:57.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.086 "is_configured": false, 00:15:57.086 "data_offset": 0, 00:15:57.086 "data_size": 63488 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev2", 00:15:57.086 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 2048, 00:15:57.086 "data_size": 63488 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev3", 00:15:57.086 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 2048, 00:15:57.086 "data_size": 63488 00:15:57.086 } 00:15:57.086 ] 00:15:57.086 }' 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.086 07:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.655 07:47:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.655 "name": "raid_bdev1", 00:15:57.655 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:57.655 "strip_size_kb": 64, 00:15:57.655 "state": "online", 00:15:57.655 "raid_level": "raid5f", 00:15:57.655 "superblock": true, 00:15:57.655 "num_base_bdevs": 3, 00:15:57.655 "num_base_bdevs_discovered": 2, 00:15:57.655 "num_base_bdevs_operational": 2, 00:15:57.655 "base_bdevs_list": [ 00:15:57.655 { 00:15:57.655 "name": null, 00:15:57.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.655 "is_configured": false, 00:15:57.655 "data_offset": 0, 00:15:57.655 "data_size": 63488 00:15:57.655 }, 00:15:57.655 { 00:15:57.655 "name": "BaseBdev2", 00:15:57.655 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:57.655 "is_configured": true, 00:15:57.655 "data_offset": 2048, 00:15:57.655 "data_size": 63488 00:15:57.655 }, 00:15:57.655 { 00:15:57.655 "name": "BaseBdev3", 00:15:57.655 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:57.655 "is_configured": true, 00:15:57.655 "data_offset": 2048, 00:15:57.655 
"data_size": 63488 00:15:57.655 } 00:15:57.655 ] 00:15:57.655 }' 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.655 [2024-11-29 07:47:47.428270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.655 [2024-11-29 07:47:47.443691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.655 07:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.655 [2024-11-29 07:47:47.450684] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.593 "name": "raid_bdev1", 00:15:58.593 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:58.593 "strip_size_kb": 64, 00:15:58.593 "state": "online", 00:15:58.593 "raid_level": "raid5f", 00:15:58.593 "superblock": true, 00:15:58.593 "num_base_bdevs": 3, 00:15:58.593 "num_base_bdevs_discovered": 3, 00:15:58.593 "num_base_bdevs_operational": 3, 00:15:58.593 "process": { 00:15:58.593 "type": "rebuild", 00:15:58.593 "target": "spare", 00:15:58.593 "progress": { 00:15:58.593 "blocks": 20480, 00:15:58.593 "percent": 16 00:15:58.593 } 00:15:58.593 }, 00:15:58.593 "base_bdevs_list": [ 00:15:58.593 { 00:15:58.593 "name": "spare", 00:15:58.593 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:15:58.593 "is_configured": true, 00:15:58.593 "data_offset": 2048, 00:15:58.593 "data_size": 63488 00:15:58.593 }, 00:15:58.593 { 00:15:58.593 "name": "BaseBdev2", 00:15:58.593 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:58.593 "is_configured": true, 00:15:58.593 "data_offset": 2048, 00:15:58.593 "data_size": 63488 00:15:58.593 }, 00:15:58.593 { 00:15:58.593 "name": "BaseBdev3", 00:15:58.593 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:58.593 "is_configured": true, 00:15:58.593 "data_offset": 2048, 00:15:58.593 "data_size": 63488 00:15:58.593 } 00:15:58.593 ] 00:15:58.593 }' 
00:15:58.593 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:58.853 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=547 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.853 "name": "raid_bdev1", 00:15:58.853 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:15:58.853 "strip_size_kb": 64, 00:15:58.853 "state": "online", 00:15:58.853 "raid_level": "raid5f", 00:15:58.853 "superblock": true, 00:15:58.853 "num_base_bdevs": 3, 00:15:58.853 "num_base_bdevs_discovered": 3, 00:15:58.853 "num_base_bdevs_operational": 3, 00:15:58.853 "process": { 00:15:58.853 "type": "rebuild", 00:15:58.853 "target": "spare", 00:15:58.853 "progress": { 00:15:58.853 "blocks": 22528, 00:15:58.853 "percent": 17 00:15:58.853 } 00:15:58.853 }, 00:15:58.853 "base_bdevs_list": [ 00:15:58.853 { 00:15:58.853 "name": "spare", 00:15:58.853 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:15:58.853 "is_configured": true, 00:15:58.853 "data_offset": 2048, 00:15:58.853 "data_size": 63488 00:15:58.853 }, 00:15:58.853 { 00:15:58.853 "name": "BaseBdev2", 00:15:58.853 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:15:58.853 "is_configured": true, 00:15:58.853 "data_offset": 2048, 00:15:58.853 "data_size": 63488 00:15:58.853 }, 00:15:58.853 { 00:15:58.853 "name": "BaseBdev3", 00:15:58.853 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:15:58.853 "is_configured": true, 00:15:58.853 "data_offset": 2048, 00:15:58.853 "data_size": 63488 00:15:58.853 } 00:15:58.853 ] 00:15:58.853 }' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.853 07:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.790 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.048 "name": "raid_bdev1", 00:16:00.048 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:00.048 "strip_size_kb": 64, 00:16:00.048 "state": "online", 00:16:00.048 "raid_level": "raid5f", 00:16:00.048 "superblock": true, 00:16:00.048 "num_base_bdevs": 3, 00:16:00.048 "num_base_bdevs_discovered": 3, 00:16:00.048 
"num_base_bdevs_operational": 3, 00:16:00.048 "process": { 00:16:00.048 "type": "rebuild", 00:16:00.048 "target": "spare", 00:16:00.048 "progress": { 00:16:00.048 "blocks": 45056, 00:16:00.048 "percent": 35 00:16:00.048 } 00:16:00.048 }, 00:16:00.048 "base_bdevs_list": [ 00:16:00.048 { 00:16:00.048 "name": "spare", 00:16:00.048 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 2048, 00:16:00.048 "data_size": 63488 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "name": "BaseBdev2", 00:16:00.048 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 2048, 00:16:00.048 "data_size": 63488 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "name": "BaseBdev3", 00:16:00.048 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 2048, 00:16:00.048 "data_size": 63488 00:16:00.048 } 00:16:00.048 ] 00:16:00.048 }' 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.048 07:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.984 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.984 "name": "raid_bdev1", 00:16:00.984 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:00.984 "strip_size_kb": 64, 00:16:00.984 "state": "online", 00:16:00.984 "raid_level": "raid5f", 00:16:00.984 "superblock": true, 00:16:00.984 "num_base_bdevs": 3, 00:16:00.984 "num_base_bdevs_discovered": 3, 00:16:00.984 "num_base_bdevs_operational": 3, 00:16:00.984 "process": { 00:16:00.984 "type": "rebuild", 00:16:00.984 "target": "spare", 00:16:00.984 "progress": { 00:16:00.984 "blocks": 69632, 00:16:00.984 "percent": 54 00:16:00.984 } 00:16:00.984 }, 00:16:00.984 "base_bdevs_list": [ 00:16:00.984 { 00:16:00.984 "name": "spare", 00:16:00.984 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:00.984 "is_configured": true, 00:16:00.984 "data_offset": 2048, 00:16:00.984 "data_size": 63488 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev2", 00:16:00.984 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:00.984 "is_configured": true, 00:16:00.984 "data_offset": 2048, 00:16:00.984 "data_size": 63488 00:16:00.984 }, 00:16:00.984 { 00:16:00.984 "name": "BaseBdev3", 
00:16:00.984 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:00.984 "is_configured": true, 00:16:00.984 "data_offset": 2048, 00:16:00.985 "data_size": 63488 00:16:00.985 } 00:16:00.985 ] 00:16:00.985 }' 00:16:01.243 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.243 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.243 07:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.243 07:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.243 07:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.180 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.181 "name": "raid_bdev1", 00:16:02.181 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:02.181 "strip_size_kb": 64, 00:16:02.181 "state": "online", 00:16:02.181 "raid_level": "raid5f", 00:16:02.181 "superblock": true, 00:16:02.181 "num_base_bdevs": 3, 00:16:02.181 "num_base_bdevs_discovered": 3, 00:16:02.181 "num_base_bdevs_operational": 3, 00:16:02.181 "process": { 00:16:02.181 "type": "rebuild", 00:16:02.181 "target": "spare", 00:16:02.181 "progress": { 00:16:02.181 "blocks": 92160, 00:16:02.181 "percent": 72 00:16:02.181 } 00:16:02.181 }, 00:16:02.181 "base_bdevs_list": [ 00:16:02.181 { 00:16:02.181 "name": "spare", 00:16:02.181 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:02.181 "is_configured": true, 00:16:02.181 "data_offset": 2048, 00:16:02.181 "data_size": 63488 00:16:02.181 }, 00:16:02.181 { 00:16:02.181 "name": "BaseBdev2", 00:16:02.181 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:02.181 "is_configured": true, 00:16:02.181 "data_offset": 2048, 00:16:02.181 "data_size": 63488 00:16:02.181 }, 00:16:02.181 { 00:16:02.181 "name": "BaseBdev3", 00:16:02.181 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:02.181 "is_configured": true, 00:16:02.181 "data_offset": 2048, 00:16:02.181 "data_size": 63488 00:16:02.181 } 00:16:02.181 ] 00:16:02.181 }' 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.181 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.440 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.440 07:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.377 07:47:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.377 "name": "raid_bdev1", 00:16:03.377 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:03.377 "strip_size_kb": 64, 00:16:03.377 "state": "online", 00:16:03.377 "raid_level": "raid5f", 00:16:03.377 "superblock": true, 00:16:03.377 "num_base_bdevs": 3, 00:16:03.377 "num_base_bdevs_discovered": 3, 00:16:03.377 "num_base_bdevs_operational": 3, 00:16:03.377 "process": { 00:16:03.377 "type": "rebuild", 00:16:03.377 "target": "spare", 00:16:03.377 "progress": { 00:16:03.377 "blocks": 114688, 00:16:03.377 "percent": 90 00:16:03.377 } 00:16:03.377 }, 00:16:03.377 "base_bdevs_list": [ 00:16:03.377 { 00:16:03.377 "name": "spare", 00:16:03.377 "uuid": 
"e743672a-956b-5388-a306-8b2dd7843484", 00:16:03.377 "is_configured": true, 00:16:03.377 "data_offset": 2048, 00:16:03.377 "data_size": 63488 00:16:03.377 }, 00:16:03.377 { 00:16:03.377 "name": "BaseBdev2", 00:16:03.377 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:03.377 "is_configured": true, 00:16:03.377 "data_offset": 2048, 00:16:03.377 "data_size": 63488 00:16:03.377 }, 00:16:03.377 { 00:16:03.377 "name": "BaseBdev3", 00:16:03.377 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:03.378 "is_configured": true, 00:16:03.378 "data_offset": 2048, 00:16:03.378 "data_size": 63488 00:16:03.378 } 00:16:03.378 ] 00:16:03.378 }' 00:16:03.378 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.378 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.378 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.378 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.378 07:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.965 [2024-11-29 07:47:53.687130] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.965 [2024-11-29 07:47:53.687214] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.965 [2024-11-29 07:47:53.687305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.533 "name": "raid_bdev1", 00:16:04.533 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:04.533 "strip_size_kb": 64, 00:16:04.533 "state": "online", 00:16:04.533 "raid_level": "raid5f", 00:16:04.533 "superblock": true, 00:16:04.533 "num_base_bdevs": 3, 00:16:04.533 "num_base_bdevs_discovered": 3, 00:16:04.533 "num_base_bdevs_operational": 3, 00:16:04.533 "base_bdevs_list": [ 00:16:04.533 { 00:16:04.533 "name": "spare", 00:16:04.533 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 }, 00:16:04.533 { 00:16:04.533 "name": "BaseBdev2", 00:16:04.533 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 }, 00:16:04.533 { 00:16:04.533 "name": "BaseBdev3", 00:16:04.533 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 } 
00:16:04.533 ] 00:16:04.533 }' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.533 "name": "raid_bdev1", 00:16:04.533 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:04.533 "strip_size_kb": 64, 00:16:04.533 "state": "online", 00:16:04.533 "raid_level": 
"raid5f", 00:16:04.533 "superblock": true, 00:16:04.533 "num_base_bdevs": 3, 00:16:04.533 "num_base_bdevs_discovered": 3, 00:16:04.533 "num_base_bdevs_operational": 3, 00:16:04.533 "base_bdevs_list": [ 00:16:04.533 { 00:16:04.533 "name": "spare", 00:16:04.533 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 }, 00:16:04.533 { 00:16:04.533 "name": "BaseBdev2", 00:16:04.533 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 }, 00:16:04.533 { 00:16:04.533 "name": "BaseBdev3", 00:16:04.533 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:04.533 "is_configured": true, 00:16:04.533 "data_offset": 2048, 00:16:04.533 "data_size": 63488 00:16:04.533 } 00:16:04.533 ] 00:16:04.533 }' 00:16:04.533 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.792 07:47:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.792 "name": "raid_bdev1", 00:16:04.792 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:04.792 "strip_size_kb": 64, 00:16:04.792 "state": "online", 00:16:04.792 "raid_level": "raid5f", 00:16:04.792 "superblock": true, 00:16:04.792 "num_base_bdevs": 3, 00:16:04.792 "num_base_bdevs_discovered": 3, 00:16:04.792 "num_base_bdevs_operational": 3, 00:16:04.792 "base_bdevs_list": [ 00:16:04.792 { 00:16:04.792 "name": "spare", 00:16:04.792 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:04.792 "is_configured": true, 00:16:04.792 "data_offset": 2048, 00:16:04.792 "data_size": 63488 00:16:04.792 }, 00:16:04.792 { 00:16:04.792 "name": "BaseBdev2", 00:16:04.792 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:04.792 "is_configured": true, 00:16:04.792 "data_offset": 2048, 00:16:04.792 
"data_size": 63488 00:16:04.792 }, 00:16:04.792 { 00:16:04.792 "name": "BaseBdev3", 00:16:04.792 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:04.792 "is_configured": true, 00:16:04.792 "data_offset": 2048, 00:16:04.792 "data_size": 63488 00:16:04.792 } 00:16:04.792 ] 00:16:04.792 }' 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.792 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.051 [2024-11-29 07:47:54.963290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.051 [2024-11-29 07:47:54.963322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.051 [2024-11-29 07:47:54.963407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.051 [2024-11-29 07:47:54.963485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.051 [2024-11-29 07:47:54.963500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:05.051 07:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:05.309 /dev/nbd0 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.309 1+0 records in 00:16:05.309 1+0 records out 00:16:05.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434894 s, 9.4 MB/s 00:16:05.309 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:05.568 /dev/nbd1 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.568 1+0 records in 00:16:05.568 1+0 records out 00:16:05.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463814 s, 8.8 MB/s 00:16:05.568 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.828 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.087 07:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 [2024-11-29 07:47:56.121143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.345 [2024-11-29 07:47:56.121203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.345 [2024-11-29 07:47:56.121226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:06.345 [2024-11-29 07:47:56.121237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.345 [2024-11-29 07:47:56.123494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.345 [2024-11-29 07:47:56.123536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.345 [2024-11-29 07:47:56.123629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.345 [2024-11-29 07:47:56.123685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.345 [2024-11-29 07:47:56.123832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.345 [2024-11-29 07:47:56.123978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.345 spare 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 [2024-11-29 07:47:56.223891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:06.345 [2024-11-29 07:47:56.223922] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:06.345 [2024-11-29 07:47:56.224223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:06.345 [2024-11-29 07:47:56.229367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:06.345 [2024-11-29 07:47:56.229390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:06.345 [2024-11-29 07:47:56.229598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.345 07:47:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.604 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.604 "name": "raid_bdev1", 00:16:06.604 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:06.604 "strip_size_kb": 64, 00:16:06.604 "state": "online", 00:16:06.604 "raid_level": "raid5f", 00:16:06.604 "superblock": true, 00:16:06.604 "num_base_bdevs": 3, 00:16:06.604 "num_base_bdevs_discovered": 3, 00:16:06.604 "num_base_bdevs_operational": 3, 00:16:06.604 "base_bdevs_list": [ 00:16:06.604 { 00:16:06.604 "name": "spare", 00:16:06.604 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:06.604 "is_configured": true, 00:16:06.604 "data_offset": 2048, 00:16:06.604 "data_size": 63488 00:16:06.604 }, 00:16:06.604 { 00:16:06.604 "name": "BaseBdev2", 00:16:06.604 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:06.604 "is_configured": true, 00:16:06.604 "data_offset": 2048, 00:16:06.604 "data_size": 63488 00:16:06.604 }, 00:16:06.604 { 00:16:06.604 "name": "BaseBdev3", 00:16:06.604 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:06.604 "is_configured": true, 00:16:06.604 "data_offset": 2048, 00:16:06.604 "data_size": 63488 00:16:06.604 } 00:16:06.604 ] 00:16:06.604 }' 00:16:06.604 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.604 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.863 07:47:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.863 "name": "raid_bdev1", 00:16:06.863 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:06.863 "strip_size_kb": 64, 00:16:06.863 "state": "online", 00:16:06.863 "raid_level": "raid5f", 00:16:06.863 "superblock": true, 00:16:06.863 "num_base_bdevs": 3, 00:16:06.863 "num_base_bdevs_discovered": 3, 00:16:06.863 "num_base_bdevs_operational": 3, 00:16:06.863 "base_bdevs_list": [ 00:16:06.863 { 00:16:06.863 "name": "spare", 00:16:06.863 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:06.863 "is_configured": true, 00:16:06.863 "data_offset": 2048, 00:16:06.863 "data_size": 63488 00:16:06.863 }, 00:16:06.863 { 00:16:06.863 "name": "BaseBdev2", 00:16:06.863 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:06.863 "is_configured": true, 00:16:06.863 "data_offset": 2048, 00:16:06.863 "data_size": 63488 00:16:06.863 }, 00:16:06.863 { 00:16:06.863 "name": "BaseBdev3", 00:16:06.863 "uuid": 
"374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:06.863 "is_configured": true, 00:16:06.863 "data_offset": 2048, 00:16:06.863 "data_size": 63488 00:16:06.863 } 00:16:06.863 ] 00:16:06.863 }' 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.863 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.122 [2024-11-29 07:47:56.875490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:07.122 
07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.122 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.123 "name": "raid_bdev1", 00:16:07.123 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:07.123 "strip_size_kb": 64, 00:16:07.123 "state": "online", 00:16:07.123 "raid_level": "raid5f", 00:16:07.123 "superblock": true, 00:16:07.123 "num_base_bdevs": 3, 00:16:07.123 "num_base_bdevs_discovered": 2, 00:16:07.123 "num_base_bdevs_operational": 2, 
00:16:07.123 "base_bdevs_list": [ 00:16:07.123 { 00:16:07.123 "name": null, 00:16:07.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.123 "is_configured": false, 00:16:07.123 "data_offset": 0, 00:16:07.123 "data_size": 63488 00:16:07.123 }, 00:16:07.123 { 00:16:07.123 "name": "BaseBdev2", 00:16:07.123 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:07.123 "is_configured": true, 00:16:07.123 "data_offset": 2048, 00:16:07.123 "data_size": 63488 00:16:07.123 }, 00:16:07.123 { 00:16:07.123 "name": "BaseBdev3", 00:16:07.123 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:07.123 "is_configured": true, 00:16:07.123 "data_offset": 2048, 00:16:07.123 "data_size": 63488 00:16:07.123 } 00:16:07.123 ] 00:16:07.123 }' 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.123 07:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.382 07:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.382 07:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.382 07:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.382 [2024-11-29 07:47:57.322777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.382 [2024-11-29 07:47:57.322990] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.382 [2024-11-29 07:47:57.323010] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.382 [2024-11-29 07:47:57.323058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.640 [2024-11-29 07:47:57.339041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:07.640 07:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.640 07:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:07.640 [2024-11-29 07:47:57.346182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.578 "name": "raid_bdev1", 00:16:08.578 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:08.578 "strip_size_kb": 64, 00:16:08.578 "state": "online", 00:16:08.578 
"raid_level": "raid5f", 00:16:08.578 "superblock": true, 00:16:08.578 "num_base_bdevs": 3, 00:16:08.578 "num_base_bdevs_discovered": 3, 00:16:08.578 "num_base_bdevs_operational": 3, 00:16:08.578 "process": { 00:16:08.578 "type": "rebuild", 00:16:08.578 "target": "spare", 00:16:08.578 "progress": { 00:16:08.578 "blocks": 20480, 00:16:08.578 "percent": 16 00:16:08.578 } 00:16:08.578 }, 00:16:08.578 "base_bdevs_list": [ 00:16:08.578 { 00:16:08.578 "name": "spare", 00:16:08.578 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:08.578 "is_configured": true, 00:16:08.578 "data_offset": 2048, 00:16:08.578 "data_size": 63488 00:16:08.578 }, 00:16:08.578 { 00:16:08.578 "name": "BaseBdev2", 00:16:08.578 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:08.578 "is_configured": true, 00:16:08.578 "data_offset": 2048, 00:16:08.578 "data_size": 63488 00:16:08.578 }, 00:16:08.578 { 00:16:08.578 "name": "BaseBdev3", 00:16:08.578 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:08.578 "is_configured": true, 00:16:08.578 "data_offset": 2048, 00:16:08.578 "data_size": 63488 00:16:08.578 } 00:16:08.578 ] 00:16:08.578 }' 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.578 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.578 [2024-11-29 07:47:58.473192] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.838 [2024-11-29 07:47:58.553981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.838 [2024-11-29 07:47:58.554038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.838 [2024-11-29 07:47:58.554069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.838 [2024-11-29 07:47:58.554078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.838 "name": "raid_bdev1", 00:16:08.838 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:08.838 "strip_size_kb": 64, 00:16:08.838 "state": "online", 00:16:08.838 "raid_level": "raid5f", 00:16:08.838 "superblock": true, 00:16:08.838 "num_base_bdevs": 3, 00:16:08.838 "num_base_bdevs_discovered": 2, 00:16:08.838 "num_base_bdevs_operational": 2, 00:16:08.838 "base_bdevs_list": [ 00:16:08.838 { 00:16:08.838 "name": null, 00:16:08.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.838 "is_configured": false, 00:16:08.838 "data_offset": 0, 00:16:08.838 "data_size": 63488 00:16:08.838 }, 00:16:08.838 { 00:16:08.838 "name": "BaseBdev2", 00:16:08.838 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:08.838 "is_configured": true, 00:16:08.838 "data_offset": 2048, 00:16:08.838 "data_size": 63488 00:16:08.838 }, 00:16:08.838 { 00:16:08.838 "name": "BaseBdev3", 00:16:08.838 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:08.838 "is_configured": true, 00:16:08.838 "data_offset": 2048, 00:16:08.838 "data_size": 63488 00:16:08.838 } 00:16:08.838 ] 00:16:08.838 }' 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.838 07:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.407 07:47:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.407 07:47:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.407 07:47:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.407 [2024-11-29 07:47:59.074378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.407 [2024-11-29 07:47:59.074452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.407 [2024-11-29 07:47:59.074472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:09.407 [2024-11-29 07:47:59.074486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.407 [2024-11-29 07:47:59.074964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.407 [2024-11-29 07:47:59.074993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.407 [2024-11-29 07:47:59.075080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.407 [2024-11-29 07:47:59.075113] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.407 [2024-11-29 07:47:59.075123] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.407 [2024-11-29 07:47:59.075146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.407 [2024-11-29 07:47:59.090072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:09.407 spare 00:16:09.407 07:47:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.407 07:47:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.407 [2024-11-29 07:47:59.097319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.343 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.343 "name": "raid_bdev1", 00:16:10.343 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:10.343 "strip_size_kb": 64, 00:16:10.343 "state": 
"online", 00:16:10.343 "raid_level": "raid5f", 00:16:10.343 "superblock": true, 00:16:10.343 "num_base_bdevs": 3, 00:16:10.343 "num_base_bdevs_discovered": 3, 00:16:10.343 "num_base_bdevs_operational": 3, 00:16:10.343 "process": { 00:16:10.343 "type": "rebuild", 00:16:10.343 "target": "spare", 00:16:10.343 "progress": { 00:16:10.343 "blocks": 20480, 00:16:10.343 "percent": 16 00:16:10.343 } 00:16:10.343 }, 00:16:10.343 "base_bdevs_list": [ 00:16:10.343 { 00:16:10.343 "name": "spare", 00:16:10.343 "uuid": "e743672a-956b-5388-a306-8b2dd7843484", 00:16:10.343 "is_configured": true, 00:16:10.343 "data_offset": 2048, 00:16:10.343 "data_size": 63488 00:16:10.343 }, 00:16:10.343 { 00:16:10.343 "name": "BaseBdev2", 00:16:10.343 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:10.343 "is_configured": true, 00:16:10.343 "data_offset": 2048, 00:16:10.343 "data_size": 63488 00:16:10.343 }, 00:16:10.343 { 00:16:10.343 "name": "BaseBdev3", 00:16:10.343 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:10.343 "is_configured": true, 00:16:10.343 "data_offset": 2048, 00:16:10.343 "data_size": 63488 00:16:10.343 } 00:16:10.343 ] 00:16:10.343 }' 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.344 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.344 [2024-11-29 07:48:00.252107] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.602 [2024-11-29 07:48:00.304943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.602 [2024-11-29 07:48:00.305012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.602 [2024-11-29 07:48:00.305031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.602 [2024-11-29 07:48:00.305038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.602 "name": "raid_bdev1", 00:16:10.602 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:10.602 "strip_size_kb": 64, 00:16:10.602 "state": "online", 00:16:10.602 "raid_level": "raid5f", 00:16:10.602 "superblock": true, 00:16:10.602 "num_base_bdevs": 3, 00:16:10.602 "num_base_bdevs_discovered": 2, 00:16:10.602 "num_base_bdevs_operational": 2, 00:16:10.602 "base_bdevs_list": [ 00:16:10.602 { 00:16:10.602 "name": null, 00:16:10.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.602 "is_configured": false, 00:16:10.602 "data_offset": 0, 00:16:10.602 "data_size": 63488 00:16:10.602 }, 00:16:10.602 { 00:16:10.602 "name": "BaseBdev2", 00:16:10.602 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:10.602 "is_configured": true, 00:16:10.602 "data_offset": 2048, 00:16:10.602 "data_size": 63488 00:16:10.602 }, 00:16:10.602 { 00:16:10.602 "name": "BaseBdev3", 00:16:10.602 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:10.602 "is_configured": true, 00:16:10.602 "data_offset": 2048, 00:16:10.602 "data_size": 63488 00:16:10.602 } 00:16:10.602 ] 00:16:10.602 }' 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.602 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.170 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.171 "name": "raid_bdev1", 00:16:11.171 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:11.171 "strip_size_kb": 64, 00:16:11.171 "state": "online", 00:16:11.171 "raid_level": "raid5f", 00:16:11.171 "superblock": true, 00:16:11.171 "num_base_bdevs": 3, 00:16:11.171 "num_base_bdevs_discovered": 2, 00:16:11.171 "num_base_bdevs_operational": 2, 00:16:11.171 "base_bdevs_list": [ 00:16:11.171 { 00:16:11.171 "name": null, 00:16:11.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.171 "is_configured": false, 00:16:11.171 "data_offset": 0, 00:16:11.171 "data_size": 63488 00:16:11.171 }, 00:16:11.171 { 00:16:11.171 "name": "BaseBdev2", 00:16:11.171 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:11.171 "is_configured": true, 00:16:11.171 "data_offset": 2048, 00:16:11.171 "data_size": 63488 00:16:11.171 }, 00:16:11.171 { 00:16:11.171 "name": "BaseBdev3", 00:16:11.171 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:11.171 "is_configured": true, 
00:16:11.171 "data_offset": 2048, 00:16:11.171 "data_size": 63488 00:16:11.171 } 00:16:11.171 ] 00:16:11.171 }' 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 [2024-11-29 07:48:00.985591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.171 [2024-11-29 07:48:00.985661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.171 [2024-11-29 07:48:00.985685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:11.171 [2024-11-29 07:48:00.985694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.171 [2024-11-29 07:48:00.986167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.171 [2024-11-29 
07:48:00.986193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.171 [2024-11-29 07:48:00.986273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.171 [2024-11-29 07:48:00.986288] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.171 [2024-11-29 07:48:00.986311] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.171 [2024-11-29 07:48:00.986322] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.171 BaseBdev1 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.171 07:48:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.108 07:48:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.108 07:48:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.108 "name": "raid_bdev1", 00:16:12.108 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:12.108 "strip_size_kb": 64, 00:16:12.108 "state": "online", 00:16:12.108 "raid_level": "raid5f", 00:16:12.108 "superblock": true, 00:16:12.108 "num_base_bdevs": 3, 00:16:12.108 "num_base_bdevs_discovered": 2, 00:16:12.108 "num_base_bdevs_operational": 2, 00:16:12.108 "base_bdevs_list": [ 00:16:12.108 { 00:16:12.108 "name": null, 00:16:12.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.108 "is_configured": false, 00:16:12.108 "data_offset": 0, 00:16:12.108 "data_size": 63488 00:16:12.108 }, 00:16:12.108 { 00:16:12.108 "name": "BaseBdev2", 00:16:12.108 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:12.108 "is_configured": true, 00:16:12.108 "data_offset": 2048, 00:16:12.108 "data_size": 63488 00:16:12.108 }, 00:16:12.108 { 00:16:12.108 "name": "BaseBdev3", 00:16:12.108 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:12.108 "is_configured": true, 00:16:12.108 "data_offset": 2048, 00:16:12.108 "data_size": 63488 00:16:12.108 } 00:16:12.108 ] 00:16:12.108 }' 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.108 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.678 "name": "raid_bdev1", 00:16:12.678 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:12.678 "strip_size_kb": 64, 00:16:12.678 "state": "online", 00:16:12.678 "raid_level": "raid5f", 00:16:12.678 "superblock": true, 00:16:12.678 "num_base_bdevs": 3, 00:16:12.678 "num_base_bdevs_discovered": 2, 00:16:12.678 "num_base_bdevs_operational": 2, 00:16:12.678 "base_bdevs_list": [ 00:16:12.678 { 00:16:12.678 "name": null, 00:16:12.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.678 "is_configured": false, 00:16:12.678 "data_offset": 0, 00:16:12.678 "data_size": 63488 00:16:12.678 }, 00:16:12.678 { 00:16:12.678 "name": "BaseBdev2", 00:16:12.678 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 
00:16:12.678 "is_configured": true, 00:16:12.678 "data_offset": 2048, 00:16:12.678 "data_size": 63488 00:16:12.678 }, 00:16:12.678 { 00:16:12.678 "name": "BaseBdev3", 00:16:12.678 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:12.678 "is_configured": true, 00:16:12.678 "data_offset": 2048, 00:16:12.678 "data_size": 63488 00:16:12.678 } 00:16:12.678 ] 00:16:12.678 }' 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.678 07:48:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.678 [2024-11-29 07:48:02.551001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.678 [2024-11-29 07:48:02.551222] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.678 [2024-11-29 07:48:02.551242] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.678 request: 00:16:12.678 { 00:16:12.678 "base_bdev": "BaseBdev1", 00:16:12.678 "raid_bdev": "raid_bdev1", 00:16:12.678 "method": "bdev_raid_add_base_bdev", 00:16:12.678 "req_id": 1 00:16:12.678 } 00:16:12.678 Got JSON-RPC error response 00:16:12.678 response: 00:16:12.678 { 00:16:12.678 "code": -22, 00:16:12.678 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:12.678 } 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.678 07:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.058 "name": "raid_bdev1", 00:16:14.058 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:14.058 "strip_size_kb": 64, 00:16:14.058 "state": "online", 00:16:14.058 "raid_level": "raid5f", 00:16:14.058 "superblock": true, 00:16:14.058 "num_base_bdevs": 3, 00:16:14.058 "num_base_bdevs_discovered": 2, 00:16:14.058 "num_base_bdevs_operational": 2, 00:16:14.058 "base_bdevs_list": [ 00:16:14.058 { 00:16:14.058 "name": null, 00:16:14.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.058 "is_configured": false, 00:16:14.058 "data_offset": 0, 00:16:14.058 "data_size": 63488 00:16:14.058 }, 00:16:14.058 { 00:16:14.058 
"name": "BaseBdev2", 00:16:14.058 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:14.058 "is_configured": true, 00:16:14.058 "data_offset": 2048, 00:16:14.058 "data_size": 63488 00:16:14.058 }, 00:16:14.058 { 00:16:14.058 "name": "BaseBdev3", 00:16:14.058 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:14.058 "is_configured": true, 00:16:14.058 "data_offset": 2048, 00:16:14.058 "data_size": 63488 00:16:14.058 } 00:16:14.058 ] 00:16:14.058 }' 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.058 07:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.318 "name": "raid_bdev1", 00:16:14.318 "uuid": "5a7c097d-3d18-49cb-8a1f-09d72ffa5030", 00:16:14.318 
"strip_size_kb": 64, 00:16:14.318 "state": "online", 00:16:14.318 "raid_level": "raid5f", 00:16:14.318 "superblock": true, 00:16:14.318 "num_base_bdevs": 3, 00:16:14.318 "num_base_bdevs_discovered": 2, 00:16:14.318 "num_base_bdevs_operational": 2, 00:16:14.318 "base_bdevs_list": [ 00:16:14.318 { 00:16:14.318 "name": null, 00:16:14.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.318 "is_configured": false, 00:16:14.318 "data_offset": 0, 00:16:14.318 "data_size": 63488 00:16:14.318 }, 00:16:14.318 { 00:16:14.318 "name": "BaseBdev2", 00:16:14.318 "uuid": "33c75c2e-f06c-59fd-bc14-2b0856d6ddf3", 00:16:14.318 "is_configured": true, 00:16:14.318 "data_offset": 2048, 00:16:14.318 "data_size": 63488 00:16:14.318 }, 00:16:14.318 { 00:16:14.318 "name": "BaseBdev3", 00:16:14.318 "uuid": "374d9afc-5fec-53c1-aa60-589d42acccab", 00:16:14.318 "is_configured": true, 00:16:14.318 "data_offset": 2048, 00:16:14.318 "data_size": 63488 00:16:14.318 } 00:16:14.318 ] 00:16:14.318 }' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81711 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81711 ']' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81711 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.318 07:48:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81711 00:16:14.318 killing process with pid 81711 00:16:14.318 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.318 00:16:14.318 Latency(us) 00:16:14.318 [2024-11-29T07:48:04.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.318 [2024-11-29T07:48:04.263Z] =================================================================================================================== 00:16:14.318 [2024-11-29T07:48:04.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81711' 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81711 00:16:14.318 [2024-11-29 07:48:04.140349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.318 [2024-11-29 07:48:04.140462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.318 07:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81711 00:16:14.318 [2024-11-29 07:48:04.140524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.318 [2024-11-29 07:48:04.140536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:14.578 [2024-11-29 07:48:04.513055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.955 07:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.955 00:16:15.955 real 0m22.831s 00:16:15.955 user 0m29.254s 
00:16:15.955 sys 0m2.679s 00:16:15.955 07:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.955 07:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.955 ************************************ 00:16:15.955 END TEST raid5f_rebuild_test_sb 00:16:15.955 ************************************ 00:16:15.955 07:48:05 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:15.955 07:48:05 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:15.955 07:48:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:15.955 07:48:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.955 07:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.955 ************************************ 00:16:15.955 START TEST raid5f_state_function_test 00:16:15.955 ************************************ 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82454 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:15.955 Process raid pid: 82454 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82454' 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82454 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82454 ']' 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.955 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.956 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.956 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.956 07:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.956 [2024-11-29 07:48:05.736383] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:15.956 [2024-11-29 07:48:05.736502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.215 [2024-11-29 07:48:05.906896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.215 [2024-11-29 07:48:06.011146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.543 [2024-11-29 07:48:06.195463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.543 [2024-11-29 07:48:06.195496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.833 [2024-11-29 07:48:06.566440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.833 [2024-11-29 07:48:06.566510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.833 [2024-11-29 07:48:06.566520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.833 [2024-11-29 07:48:06.566530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.833 [2024-11-29 07:48:06.566536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:16.833 [2024-11-29 07:48:06.566545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.833 [2024-11-29 07:48:06.566551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.833 [2024-11-29 07:48:06.566560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.833 07:48:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:16.833 "name": "Existed_Raid",
00:16:16.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.833 "strip_size_kb": 64,
00:16:16.833 "state": "configuring",
00:16:16.833 "raid_level": "raid5f",
00:16:16.833 "superblock": false,
00:16:16.833 "num_base_bdevs": 4,
00:16:16.833 "num_base_bdevs_discovered": 0,
00:16:16.833 "num_base_bdevs_operational": 4,
00:16:16.833 "base_bdevs_list": [
00:16:16.833 {
00:16:16.833 "name": "BaseBdev1",
00:16:16.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.833 "is_configured": false,
00:16:16.833 "data_offset": 0,
00:16:16.833 "data_size": 0
00:16:16.833 },
00:16:16.833 {
00:16:16.833 "name": "BaseBdev2",
00:16:16.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.833 "is_configured": false,
00:16:16.833 "data_offset": 0,
00:16:16.833 "data_size": 0
00:16:16.833 },
00:16:16.833 {
00:16:16.833 "name": "BaseBdev3",
00:16:16.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.833 "is_configured": false,
00:16:16.833 "data_offset": 0,
00:16:16.833 "data_size": 0
00:16:16.833 },
00:16:16.833 {
00:16:16.833 "name": "BaseBdev4",
00:16:16.833 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.833 "is_configured": false,
00:16:16.833 "data_offset": 0,
00:16:16.833 "data_size": 0
00:16:16.833 }
00:16:16.833 ]
00:16:16.833 }'
00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:16.833 07:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.094 [2024-11-29 07:48:07.017595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:17.094 [2024-11-29 07:48:07.017636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.094 [2024-11-29 07:48:07.029587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:17.094 [2024-11-29 07:48:07.029628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:17.094 [2024-11-29 07:48:07.029637] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:17.094 [2024-11-29 07:48:07.029646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:17.094 [2024-11-29 07:48:07.029652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:17.094 [2024-11-29 07:48:07.029660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:17.094 [2024-11-29 07:48:07.029665] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:17.094 [2024-11-29 07:48:07.029673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.094 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.366 [2024-11-29 07:48:07.074945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:17.366 BaseBdev1
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.366 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.366 [
00:16:17.366 {
00:16:17.366 "name": "BaseBdev1",
00:16:17.366 "aliases": [
00:16:17.366 "2a2c4a50-5fa9-4d96-85e4-e21057c52c54"
00:16:17.366 ],
00:16:17.366 "product_name": "Malloc disk",
00:16:17.366 "block_size": 512,
00:16:17.366 "num_blocks": 65536,
00:16:17.366 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54",
00:16:17.366 "assigned_rate_limits": {
00:16:17.366 "rw_ios_per_sec": 0,
00:16:17.366 "rw_mbytes_per_sec": 0,
00:16:17.366 "r_mbytes_per_sec": 0,
00:16:17.366 "w_mbytes_per_sec": 0
00:16:17.366 },
00:16:17.366 "claimed": true,
00:16:17.366 "claim_type": "exclusive_write",
00:16:17.366 "zoned": false,
00:16:17.366 "supported_io_types": {
00:16:17.366 "read": true,
00:16:17.366 "write": true,
00:16:17.366 "unmap": true,
00:16:17.366 "flush": true,
00:16:17.366 "reset": true,
00:16:17.366 "nvme_admin": false,
00:16:17.366 "nvme_io": false,
00:16:17.366 "nvme_io_md": false,
00:16:17.366 "write_zeroes": true,
00:16:17.366 "zcopy": true,
00:16:17.366 "get_zone_info": false,
00:16:17.366 "zone_management": false,
00:16:17.366 "zone_append": false,
00:16:17.366 "compare": false,
00:16:17.366 "compare_and_write": false,
00:16:17.366 "abort": true,
00:16:17.366 "seek_hole": false,
00:16:17.366 "seek_data": false,
00:16:17.366 "copy": true,
00:16:17.366 "nvme_iov_md": false
00:16:17.366 },
00:16:17.366 "memory_domains": [
00:16:17.366 {
00:16:17.367 "dma_device_id": "system",
00:16:17.367 "dma_device_type": 1
00:16:17.367 },
00:16:17.367 {
00:16:17.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:17.367 "dma_device_type": 2
00:16:17.367 }
00:16:17.367 ],
00:16:17.367 "driver_specific": {}
00:16:17.367 }
00:16:17.367 ]
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.367 "name": "Existed_Raid",
00:16:17.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.367 "strip_size_kb": 64,
00:16:17.367 "state": "configuring",
00:16:17.367 "raid_level": "raid5f",
00:16:17.367 "superblock": false,
00:16:17.367 "num_base_bdevs": 4,
00:16:17.367 "num_base_bdevs_discovered": 1,
00:16:17.367 "num_base_bdevs_operational": 4,
00:16:17.367 "base_bdevs_list": [
00:16:17.367 {
00:16:17.367 "name": "BaseBdev1",
00:16:17.367 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54",
00:16:17.367 "is_configured": true,
00:16:17.367 "data_offset": 0,
00:16:17.367 "data_size": 65536
00:16:17.367 },
00:16:17.367 {
00:16:17.367 "name": "BaseBdev2",
00:16:17.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.367 "is_configured": false,
00:16:17.367 "data_offset": 0,
00:16:17.367 "data_size": 0
00:16:17.367 },
00:16:17.367 {
00:16:17.367 "name": "BaseBdev3",
00:16:17.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.367 "is_configured": false,
00:16:17.367 "data_offset": 0,
00:16:17.367 "data_size": 0
00:16:17.367 },
00:16:17.367 {
00:16:17.367 "name": "BaseBdev4",
00:16:17.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.367 "is_configured": false,
00:16:17.367 "data_offset": 0,
00:16:17.367 "data_size": 0
00:16:17.367 }
00:16:17.367 ]
00:16:17.367 }'
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.367 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.628 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:17.628 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.628 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.888 [2024-11-29 07:48:07.574158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:17.888 [2024-11-29 07:48:07.574208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.888 [2024-11-29 07:48:07.586168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:17.888 [2024-11-29 07:48:07.587921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:17.888 [2024-11-29 07:48:07.587962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:17.888 [2024-11-29 07:48:07.587972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:17.888 [2024-11-29 07:48:07.587982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:17.888 [2024-11-29 07:48:07.587988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:17.888 [2024-11-29 07:48:07.587997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.888 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.888 "name": "Existed_Raid",
00:16:17.888 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.888 "strip_size_kb": 64,
00:16:17.888 "state": "configuring",
00:16:17.888 "raid_level": "raid5f",
00:16:17.888 "superblock": false,
00:16:17.888 "num_base_bdevs": 4,
00:16:17.888 "num_base_bdevs_discovered": 1,
00:16:17.888 "num_base_bdevs_operational": 4,
00:16:17.888 "base_bdevs_list": [
00:16:17.888 {
00:16:17.888 "name": "BaseBdev1",
00:16:17.888 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54",
00:16:17.888 "is_configured": true,
00:16:17.888 "data_offset": 0,
00:16:17.888 "data_size": 65536
00:16:17.888 },
00:16:17.888 {
00:16:17.888 "name": "BaseBdev2",
00:16:17.888 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.888 "is_configured": false,
00:16:17.888 "data_offset": 0,
00:16:17.888 "data_size": 0
00:16:17.888 },
00:16:17.888 {
00:16:17.888 "name": "BaseBdev3",
00:16:17.888 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.888 "is_configured": false,
00:16:17.888 "data_offset": 0,
00:16:17.889 "data_size": 0
00:16:17.889 },
00:16:17.889 {
00:16:17.889 "name": "BaseBdev4",
00:16:17.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.889 "is_configured": false,
00:16:17.889 "data_offset": 0,
00:16:17.889 "data_size": 0
00:16:17.889 }
00:16:17.889 ]
00:16:17.889 }'
00:16:17.889 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.889 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.150 07:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:18.150 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.150 07:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.150 [2024-11-29 07:48:08.016467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:18.150 BaseBdev2
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.150 [
00:16:18.150 {
00:16:18.150 "name": "BaseBdev2",
00:16:18.150 "aliases": [
00:16:18.150 "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7"
00:16:18.150 ],
00:16:18.150 "product_name": "Malloc disk",
00:16:18.150 "block_size": 512,
00:16:18.150 "num_blocks": 65536,
00:16:18.150 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7",
00:16:18.150 "assigned_rate_limits": {
00:16:18.150 "rw_ios_per_sec": 0,
00:16:18.150 "rw_mbytes_per_sec": 0,
00:16:18.150 "r_mbytes_per_sec": 0,
00:16:18.150 "w_mbytes_per_sec": 0
00:16:18.150 },
00:16:18.150 "claimed": true,
00:16:18.150 "claim_type": "exclusive_write",
00:16:18.150 "zoned": false,
00:16:18.150 "supported_io_types": {
00:16:18.150 "read": true,
00:16:18.150 "write": true,
00:16:18.150 "unmap": true,
00:16:18.150 "flush": true,
00:16:18.150 "reset": true,
00:16:18.150 "nvme_admin": false,
00:16:18.150 "nvme_io": false,
00:16:18.150 "nvme_io_md": false,
00:16:18.150 "write_zeroes": true,
00:16:18.150 "zcopy": true,
00:16:18.150 "get_zone_info": false,
00:16:18.150 "zone_management": false,
00:16:18.150 "zone_append": false,
00:16:18.150 "compare": false,
00:16:18.150 "compare_and_write": false,
00:16:18.150 "abort": true,
00:16:18.150 "seek_hole": false,
00:16:18.150 "seek_data": false,
00:16:18.150 "copy": true,
00:16:18.150 "nvme_iov_md": false
00:16:18.150 },
00:16:18.150 "memory_domains": [
00:16:18.150 {
00:16:18.150 "dma_device_id": "system",
00:16:18.150 "dma_device_type": 1
00:16:18.150 },
00:16:18.150 {
00:16:18.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:18.150 "dma_device_type": 2
00:16:18.150 }
00:16:18.150 ],
00:16:18.150 "driver_specific": {}
00:16:18.150 }
00:16:18.150 ]
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.150 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.411 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.411 "name": "Existed_Raid",
00:16:18.411 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.411 "strip_size_kb": 64,
00:16:18.411 "state": "configuring",
00:16:18.411 "raid_level": "raid5f",
00:16:18.411 "superblock": false,
00:16:18.411 "num_base_bdevs": 4,
00:16:18.411 "num_base_bdevs_discovered": 2,
00:16:18.411 "num_base_bdevs_operational": 4,
00:16:18.411 "base_bdevs_list": [
00:16:18.411 {
00:16:18.411 "name": "BaseBdev1",
00:16:18.411 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54",
00:16:18.411 "is_configured": true,
00:16:18.411 "data_offset": 0,
00:16:18.411 "data_size": 65536
00:16:18.411 },
00:16:18.411 {
00:16:18.411 "name": "BaseBdev2",
00:16:18.411 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7",
00:16:18.411 "is_configured": true,
00:16:18.411 "data_offset": 0,
00:16:18.411 "data_size": 65536
00:16:18.411 },
00:16:18.411 {
00:16:18.411 "name": "BaseBdev3",
00:16:18.411 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.411 "is_configured": false,
00:16:18.411 "data_offset": 0,
00:16:18.411 "data_size": 0
00:16:18.411 },
00:16:18.411 {
00:16:18.411 "name": "BaseBdev4",
00:16:18.411 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.411 "is_configured": false,
00:16:18.411 "data_offset": 0,
00:16:18.411 "data_size": 0
00:16:18.411 }
00:16:18.411 ]
00:16:18.411 }'
00:16:18.411 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.411 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.670 [2024-11-29 07:48:08.542894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:18.670 BaseBdev3
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.670 [
00:16:18.670 {
00:16:18.670 "name": "BaseBdev3",
00:16:18.670 "aliases": [
00:16:18.670 "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96"
00:16:18.670 ],
00:16:18.670 "product_name": "Malloc disk",
00:16:18.670 "block_size": 512,
00:16:18.670 "num_blocks": 65536,
00:16:18.670 "uuid": "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96",
00:16:18.670 "assigned_rate_limits": {
00:16:18.670 "rw_ios_per_sec": 0,
00:16:18.670 "rw_mbytes_per_sec": 0,
00:16:18.670 "r_mbytes_per_sec": 0,
00:16:18.670 "w_mbytes_per_sec": 0
00:16:18.670 },
00:16:18.670 "claimed": true,
00:16:18.670 "claim_type": "exclusive_write",
00:16:18.670 "zoned": false,
00:16:18.670 "supported_io_types": {
00:16:18.670 "read": true,
00:16:18.670 "write": true,
00:16:18.670 "unmap": true,
00:16:18.670 "flush": true,
00:16:18.670 "reset": true,
00:16:18.670 "nvme_admin": false,
00:16:18.670 "nvme_io": false,
00:16:18.670 "nvme_io_md": false,
00:16:18.670 "write_zeroes": true,
00:16:18.670 "zcopy": true,
00:16:18.670 "get_zone_info": false,
00:16:18.670 "zone_management": false,
00:16:18.670 "zone_append": false,
00:16:18.670 "compare": false,
00:16:18.670 "compare_and_write": false,
00:16:18.670 "abort": true,
00:16:18.670 "seek_hole": false,
00:16:18.670 "seek_data": false,
00:16:18.670 "copy": true,
00:16:18.670 "nvme_iov_md": false
00:16:18.670 },
00:16:18.670 "memory_domains": [
00:16:18.670 {
00:16:18.670 "dma_device_id": "system",
00:16:18.670 "dma_device_type": 1
00:16:18.670 },
00:16:18.670 {
00:16:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:18.670 "dma_device_type": 2
00:16:18.670 }
00:16:18.670 ],
00:16:18.670 "driver_specific": {}
00:16:18.670 }
00:16:18.670 ]
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.670 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.671 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.930 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.930 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.930 "name": "Existed_Raid",
00:16:18.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.930 "strip_size_kb": 64,
00:16:18.930 "state": "configuring",
00:16:18.930 "raid_level": "raid5f",
00:16:18.930 "superblock": false,
00:16:18.930 "num_base_bdevs": 4,
00:16:18.930 "num_base_bdevs_discovered": 3,
00:16:18.930 "num_base_bdevs_operational": 4,
00:16:18.930 "base_bdevs_list": [
00:16:18.930 {
00:16:18.930 "name": "BaseBdev1",
00:16:18.930 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54",
00:16:18.930 "is_configured": true,
00:16:18.930 "data_offset": 0,
00:16:18.930 "data_size": 65536
00:16:18.930 },
00:16:18.930 {
00:16:18.930 "name": "BaseBdev2",
00:16:18.930 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7",
00:16:18.930 "is_configured": true,
00:16:18.930 "data_offset": 0,
00:16:18.930 "data_size": 65536
00:16:18.930 },
00:16:18.930 {
00:16:18.930 "name": "BaseBdev3",
00:16:18.930 "uuid": "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96",
00:16:18.930 "is_configured": true,
00:16:18.930 "data_offset": 0,
00:16:18.930 "data_size": 65536
00:16:18.930 },
00:16:18.930 {
00:16:18.930 "name": "BaseBdev4",
00:16:18.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.930 "is_configured": false,
00:16:18.930 "data_offset": 0,
00:16:18.930 "data_size": 0
00:16:18.930 }
00:16:18.930 ]
00:16:18.930 }'
00:16:18.930 07:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.930 07:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.191 [2024-11-29 07:48:09.064587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:19.191 [2024-11-29 07:48:09.064650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:16:19.191 [2024-11-29 07:48:09.064659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:19.191 [2024-11-29 07:48:09.064901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:19.191 [2024-11-29 07:48:09.071604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:16:19.191 [2024-11-29 07:48:09.071670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:16:19.191 [2024-11-29 07:48:09.071961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:19.191 BaseBdev4
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.191 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.191 [
00:16:19.191 {
00:16:19.191 "name": "BaseBdev4",
00:16:19.191 "aliases": [
00:16:19.191 "7776aae7-bfbf-4e3d-a273-5acea9d25421"
00:16:19.191 ],
00:16:19.191 "product_name": "Malloc disk",
00:16:19.191 "block_size": 512,
00:16:19.191 "num_blocks": 65536,
00:16:19.191 "uuid": "7776aae7-bfbf-4e3d-a273-5acea9d25421",
00:16:19.191 "assigned_rate_limits": {
00:16:19.191 "rw_ios_per_sec": 0,
00:16:19.191 "rw_mbytes_per_sec": 0,
00:16:19.191 "r_mbytes_per_sec": 0,
00:16:19.191 "w_mbytes_per_sec": 0
00:16:19.191 },
00:16:19.191 "claimed": true,
00:16:19.191 "claim_type": "exclusive_write",
00:16:19.191 "zoned": false,
00:16:19.191 "supported_io_types": {
00:16:19.191 "read": true,
00:16:19.191 "write": true,
00:16:19.191 "unmap": true,
00:16:19.191 "flush": true,
00:16:19.191 "reset": true,
00:16:19.191 "nvme_admin": false,
00:16:19.191 "nvme_io": false,
00:16:19.191 "nvme_io_md": false,
00:16:19.191 "write_zeroes": true,
00:16:19.192 "zcopy": true,
00:16:19.192 "get_zone_info": false,
00:16:19.192 "zone_management": false,
00:16:19.192 "zone_append": false,
00:16:19.192 "compare": false,
00:16:19.192 "compare_and_write": false,
00:16:19.192 "abort": true,
00:16:19.192 "seek_hole": false,
00:16:19.192 "seek_data": false,
00:16:19.192 "copy": true,
00:16:19.192 "nvme_iov_md": false
00:16:19.192 },
00:16:19.192 "memory_domains": [
00:16:19.192 {
00:16:19.192 "dma_device_id": "system",
00:16:19.192 "dma_device_type": 1
00:16:19.192 },
00:16:19.192 {
00:16:19.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:19.192 "dma_device_type": 2
00:16:19.192 }
00:16:19.192 ],
00:16:19.192 "driver_specific": {}
00:16:19.192 }
00:16:19.192 ]
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.192 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.452 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.452 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.452 "name": "Existed_Raid",
00:16:19.452 "uuid": "3d9921de-dc7a-401a-992e-e118d34384af",
00:16:19.452 "strip_size_kb": 64,
00:16:19.452 "state": "online",
00:16:19.452 "raid_level": "raid5f",
00:16:19.452 "superblock": false,
00:16:19.452 "num_base_bdevs": 4,
00:16:19.452 "num_base_bdevs_discovered": 4,
00:16:19.452 "num_base_bdevs_operational": 4,
00:16:19.452 "base_bdevs_list": [
00:16:19.452 {
00:16:19.452 "name":
"BaseBdev1", 00:16:19.452 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54", 00:16:19.452 "is_configured": true, 00:16:19.452 "data_offset": 0, 00:16:19.452 "data_size": 65536 00:16:19.452 }, 00:16:19.452 { 00:16:19.452 "name": "BaseBdev2", 00:16:19.452 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7", 00:16:19.452 "is_configured": true, 00:16:19.452 "data_offset": 0, 00:16:19.452 "data_size": 65536 00:16:19.452 }, 00:16:19.452 { 00:16:19.452 "name": "BaseBdev3", 00:16:19.452 "uuid": "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96", 00:16:19.452 "is_configured": true, 00:16:19.452 "data_offset": 0, 00:16:19.452 "data_size": 65536 00:16:19.452 }, 00:16:19.452 { 00:16:19.452 "name": "BaseBdev4", 00:16:19.452 "uuid": "7776aae7-bfbf-4e3d-a273-5acea9d25421", 00:16:19.452 "is_configured": true, 00:16:19.452 "data_offset": 0, 00:16:19.452 "data_size": 65536 00:16:19.452 } 00:16:19.452 ] 00:16:19.452 }' 00:16:19.452 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.452 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.712 [2024-11-29 07:48:09.551173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.712 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.712 "name": "Existed_Raid", 00:16:19.712 "aliases": [ 00:16:19.712 "3d9921de-dc7a-401a-992e-e118d34384af" 00:16:19.712 ], 00:16:19.712 "product_name": "Raid Volume", 00:16:19.712 "block_size": 512, 00:16:19.712 "num_blocks": 196608, 00:16:19.712 "uuid": "3d9921de-dc7a-401a-992e-e118d34384af", 00:16:19.712 "assigned_rate_limits": { 00:16:19.712 "rw_ios_per_sec": 0, 00:16:19.712 "rw_mbytes_per_sec": 0, 00:16:19.712 "r_mbytes_per_sec": 0, 00:16:19.712 "w_mbytes_per_sec": 0 00:16:19.712 }, 00:16:19.712 "claimed": false, 00:16:19.712 "zoned": false, 00:16:19.712 "supported_io_types": { 00:16:19.712 "read": true, 00:16:19.712 "write": true, 00:16:19.712 "unmap": false, 00:16:19.712 "flush": false, 00:16:19.712 "reset": true, 00:16:19.712 "nvme_admin": false, 00:16:19.712 "nvme_io": false, 00:16:19.712 "nvme_io_md": false, 00:16:19.712 "write_zeroes": true, 00:16:19.712 "zcopy": false, 00:16:19.712 "get_zone_info": false, 00:16:19.712 "zone_management": false, 00:16:19.712 "zone_append": false, 00:16:19.712 "compare": false, 00:16:19.712 "compare_and_write": false, 00:16:19.712 "abort": false, 00:16:19.712 "seek_hole": false, 00:16:19.712 "seek_data": false, 00:16:19.712 "copy": false, 00:16:19.712 "nvme_iov_md": false 00:16:19.712 }, 00:16:19.712 "driver_specific": { 00:16:19.712 "raid": { 00:16:19.712 "uuid": "3d9921de-dc7a-401a-992e-e118d34384af", 00:16:19.712 "strip_size_kb": 64, 
00:16:19.712 "state": "online", 00:16:19.712 "raid_level": "raid5f", 00:16:19.712 "superblock": false, 00:16:19.712 "num_base_bdevs": 4, 00:16:19.712 "num_base_bdevs_discovered": 4, 00:16:19.712 "num_base_bdevs_operational": 4, 00:16:19.712 "base_bdevs_list": [ 00:16:19.712 { 00:16:19.712 "name": "BaseBdev1", 00:16:19.712 "uuid": "2a2c4a50-5fa9-4d96-85e4-e21057c52c54", 00:16:19.712 "is_configured": true, 00:16:19.712 "data_offset": 0, 00:16:19.712 "data_size": 65536 00:16:19.712 }, 00:16:19.713 { 00:16:19.713 "name": "BaseBdev2", 00:16:19.713 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7", 00:16:19.713 "is_configured": true, 00:16:19.713 "data_offset": 0, 00:16:19.713 "data_size": 65536 00:16:19.713 }, 00:16:19.713 { 00:16:19.713 "name": "BaseBdev3", 00:16:19.713 "uuid": "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96", 00:16:19.713 "is_configured": true, 00:16:19.713 "data_offset": 0, 00:16:19.713 "data_size": 65536 00:16:19.713 }, 00:16:19.713 { 00:16:19.713 "name": "BaseBdev4", 00:16:19.713 "uuid": "7776aae7-bfbf-4e3d-a273-5acea9d25421", 00:16:19.713 "is_configured": true, 00:16:19.713 "data_offset": 0, 00:16:19.713 "data_size": 65536 00:16:19.713 } 00:16:19.713 ] 00:16:19.713 } 00:16:19.713 } 00:16:19.713 }' 00:16:19.713 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.713 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:19.713 BaseBdev2 00:16:19.713 BaseBdev3 00:16:19.713 BaseBdev4' 00:16:19.713 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.973 07:48:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.973 07:48:09 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:19.973 [2024-11-29 07:48:09.850461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.233 07:48:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.233 "name": "Existed_Raid", 00:16:20.233 "uuid": "3d9921de-dc7a-401a-992e-e118d34384af", 00:16:20.233 "strip_size_kb": 64, 00:16:20.233 "state": "online", 00:16:20.233 "raid_level": "raid5f", 00:16:20.233 "superblock": false, 00:16:20.233 "num_base_bdevs": 4, 00:16:20.233 "num_base_bdevs_discovered": 3, 00:16:20.233 "num_base_bdevs_operational": 3, 00:16:20.233 "base_bdevs_list": [ 00:16:20.233 { 00:16:20.233 "name": null, 00:16:20.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.233 "is_configured": false, 00:16:20.233 "data_offset": 0, 00:16:20.233 "data_size": 65536 00:16:20.233 }, 00:16:20.233 { 00:16:20.233 "name": "BaseBdev2", 00:16:20.233 "uuid": "cf67cd5c-efe8-44f2-9e94-3c818c12e5b7", 00:16:20.233 "is_configured": true, 00:16:20.233 "data_offset": 0, 00:16:20.233 "data_size": 65536 00:16:20.233 }, 00:16:20.233 { 00:16:20.233 "name": "BaseBdev3", 00:16:20.233 "uuid": "9bb8ca0a-a2e3-4507-b4d1-f563d60c6c96", 00:16:20.233 "is_configured": true, 00:16:20.233 "data_offset": 0, 00:16:20.233 "data_size": 65536 00:16:20.233 }, 00:16:20.233 { 00:16:20.233 "name": "BaseBdev4", 00:16:20.233 "uuid": "7776aae7-bfbf-4e3d-a273-5acea9d25421", 00:16:20.233 "is_configured": true, 00:16:20.233 "data_offset": 0, 00:16:20.233 "data_size": 65536 00:16:20.233 } 00:16:20.233 ] 00:16:20.233 }' 00:16:20.233 
07:48:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.233 07:48:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.493 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.493 [2024-11-29 07:48:10.416984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.493 [2024-11-29 07:48:10.417130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.753 [2024-11-29 07:48:10.509291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.753 [2024-11-29 07:48:10.565251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.753 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.014 [2024-11-29 07:48:10.712918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:21.014 [2024-11-29 07:48:10.712969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.014 07:48:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.014 BaseBdev2 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.014 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.014 [ 00:16:21.014 { 00:16:21.014 "name": "BaseBdev2", 00:16:21.014 "aliases": [ 00:16:21.014 "e025d29c-8cb2-42f4-a799-b065fb9cc313" 00:16:21.014 ], 00:16:21.014 "product_name": "Malloc disk", 00:16:21.014 "block_size": 512, 00:16:21.014 "num_blocks": 65536, 00:16:21.014 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:21.014 "assigned_rate_limits": { 00:16:21.014 "rw_ios_per_sec": 0, 00:16:21.014 "rw_mbytes_per_sec": 0, 00:16:21.014 "r_mbytes_per_sec": 0, 00:16:21.014 "w_mbytes_per_sec": 0 00:16:21.014 }, 00:16:21.014 "claimed": false, 00:16:21.014 "zoned": false, 00:16:21.014 "supported_io_types": { 00:16:21.014 "read": true, 00:16:21.014 "write": true, 00:16:21.014 "unmap": true, 00:16:21.014 "flush": true, 00:16:21.014 "reset": true, 00:16:21.014 "nvme_admin": false, 00:16:21.014 "nvme_io": false, 00:16:21.014 "nvme_io_md": false, 00:16:21.014 "write_zeroes": true, 00:16:21.014 "zcopy": true, 00:16:21.014 "get_zone_info": false, 00:16:21.014 "zone_management": false, 00:16:21.014 "zone_append": false, 00:16:21.014 "compare": false, 00:16:21.014 "compare_and_write": false, 00:16:21.014 "abort": true, 00:16:21.014 "seek_hole": false, 00:16:21.014 "seek_data": false, 00:16:21.014 "copy": true, 00:16:21.014 "nvme_iov_md": false 00:16:21.014 }, 00:16:21.014 "memory_domains": [ 00:16:21.014 { 00:16:21.014 "dma_device_id": "system", 00:16:21.014 "dma_device_type": 1 00:16:21.014 }, 
00:16:21.014 { 00:16:21.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.014 "dma_device_type": 2 00:16:21.014 } 00:16:21.014 ], 00:16:21.014 "driver_specific": {} 00:16:21.015 } 00:16:21.015 ] 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.015 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.276 BaseBdev3 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.276 07:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.276 [ 00:16:21.276 { 00:16:21.276 "name": "BaseBdev3", 00:16:21.276 "aliases": [ 00:16:21.277 "a5b0817c-6c97-4779-9591-d14ab1c7c295" 00:16:21.277 ], 00:16:21.277 "product_name": "Malloc disk", 00:16:21.277 "block_size": 512, 00:16:21.277 "num_blocks": 65536, 00:16:21.277 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:21.277 "assigned_rate_limits": { 00:16:21.277 "rw_ios_per_sec": 0, 00:16:21.277 "rw_mbytes_per_sec": 0, 00:16:21.277 "r_mbytes_per_sec": 0, 00:16:21.277 "w_mbytes_per_sec": 0 00:16:21.277 }, 00:16:21.277 "claimed": false, 00:16:21.277 "zoned": false, 00:16:21.277 "supported_io_types": { 00:16:21.277 "read": true, 00:16:21.277 "write": true, 00:16:21.277 "unmap": true, 00:16:21.277 "flush": true, 00:16:21.277 "reset": true, 00:16:21.277 "nvme_admin": false, 00:16:21.277 "nvme_io": false, 00:16:21.277 "nvme_io_md": false, 00:16:21.277 "write_zeroes": true, 00:16:21.277 "zcopy": true, 00:16:21.277 "get_zone_info": false, 00:16:21.277 "zone_management": false, 00:16:21.277 "zone_append": false, 00:16:21.277 "compare": false, 00:16:21.277 "compare_and_write": false, 00:16:21.277 "abort": true, 00:16:21.277 "seek_hole": false, 00:16:21.277 "seek_data": false, 00:16:21.277 "copy": true, 00:16:21.277 "nvme_iov_md": false 00:16:21.277 }, 00:16:21.277 "memory_domains": [ 00:16:21.277 { 00:16:21.277 "dma_device_id": "system", 00:16:21.277 
"dma_device_type": 1 00:16:21.277 }, 00:16:21.277 { 00:16:21.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.277 "dma_device_type": 2 00:16:21.277 } 00:16:21.277 ], 00:16:21.277 "driver_specific": {} 00:16:21.277 } 00:16:21.277 ] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 BaseBdev4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.277 07:48:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 [ 00:16:21.277 { 00:16:21.277 "name": "BaseBdev4", 00:16:21.277 "aliases": [ 00:16:21.277 "813703f8-6754-44c7-a7f4-016ec683e191" 00:16:21.277 ], 00:16:21.277 "product_name": "Malloc disk", 00:16:21.277 "block_size": 512, 00:16:21.277 "num_blocks": 65536, 00:16:21.277 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:21.277 "assigned_rate_limits": { 00:16:21.277 "rw_ios_per_sec": 0, 00:16:21.277 "rw_mbytes_per_sec": 0, 00:16:21.277 "r_mbytes_per_sec": 0, 00:16:21.277 "w_mbytes_per_sec": 0 00:16:21.277 }, 00:16:21.277 "claimed": false, 00:16:21.277 "zoned": false, 00:16:21.277 "supported_io_types": { 00:16:21.277 "read": true, 00:16:21.277 "write": true, 00:16:21.277 "unmap": true, 00:16:21.277 "flush": true, 00:16:21.277 "reset": true, 00:16:21.277 "nvme_admin": false, 00:16:21.277 "nvme_io": false, 00:16:21.277 "nvme_io_md": false, 00:16:21.277 "write_zeroes": true, 00:16:21.277 "zcopy": true, 00:16:21.277 "get_zone_info": false, 00:16:21.277 "zone_management": false, 00:16:21.277 "zone_append": false, 00:16:21.277 "compare": false, 00:16:21.277 "compare_and_write": false, 00:16:21.277 "abort": true, 00:16:21.277 "seek_hole": false, 00:16:21.277 "seek_data": false, 00:16:21.277 "copy": true, 00:16:21.277 "nvme_iov_md": false 00:16:21.277 }, 00:16:21.277 "memory_domains": [ 00:16:21.277 { 00:16:21.277 
"dma_device_id": "system", 00:16:21.277 "dma_device_type": 1 00:16:21.277 }, 00:16:21.277 { 00:16:21.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.277 "dma_device_type": 2 00:16:21.277 } 00:16:21.277 ], 00:16:21.277 "driver_specific": {} 00:16:21.277 } 00:16:21.277 ] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 [2024-11-29 07:48:11.106779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.277 [2024-11-29 07:48:11.106877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.277 [2024-11-29 07:48:11.106902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.277 [2024-11-29 07:48:11.108644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.277 [2024-11-29 07:48:11.108693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.277 "name": "Existed_Raid", 00:16:21.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.277 "strip_size_kb": 64, 00:16:21.277 "state": "configuring", 00:16:21.277 "raid_level": "raid5f", 00:16:21.277 "superblock": false, 00:16:21.277 
"num_base_bdevs": 4, 00:16:21.277 "num_base_bdevs_discovered": 3, 00:16:21.277 "num_base_bdevs_operational": 4, 00:16:21.277 "base_bdevs_list": [ 00:16:21.277 { 00:16:21.277 "name": "BaseBdev1", 00:16:21.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.277 "is_configured": false, 00:16:21.277 "data_offset": 0, 00:16:21.277 "data_size": 0 00:16:21.277 }, 00:16:21.277 { 00:16:21.277 "name": "BaseBdev2", 00:16:21.277 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:21.277 "is_configured": true, 00:16:21.277 "data_offset": 0, 00:16:21.277 "data_size": 65536 00:16:21.277 }, 00:16:21.277 { 00:16:21.277 "name": "BaseBdev3", 00:16:21.277 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:21.277 "is_configured": true, 00:16:21.277 "data_offset": 0, 00:16:21.277 "data_size": 65536 00:16:21.277 }, 00:16:21.277 { 00:16:21.277 "name": "BaseBdev4", 00:16:21.277 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:21.277 "is_configured": true, 00:16:21.277 "data_offset": 0, 00:16:21.277 "data_size": 65536 00:16:21.277 } 00:16:21.277 ] 00:16:21.277 }' 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.277 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.848 [2024-11-29 07:48:11.518072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.848 "name": "Existed_Raid", 00:16:21.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.848 "strip_size_kb": 64, 00:16:21.848 "state": "configuring", 00:16:21.848 "raid_level": "raid5f", 00:16:21.848 "superblock": false, 00:16:21.848 "num_base_bdevs": 4, 
00:16:21.848 "num_base_bdevs_discovered": 2, 00:16:21.848 "num_base_bdevs_operational": 4, 00:16:21.848 "base_bdevs_list": [ 00:16:21.848 { 00:16:21.848 "name": "BaseBdev1", 00:16:21.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.848 "is_configured": false, 00:16:21.848 "data_offset": 0, 00:16:21.848 "data_size": 0 00:16:21.848 }, 00:16:21.848 { 00:16:21.848 "name": null, 00:16:21.848 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:21.848 "is_configured": false, 00:16:21.848 "data_offset": 0, 00:16:21.848 "data_size": 65536 00:16:21.848 }, 00:16:21.848 { 00:16:21.848 "name": "BaseBdev3", 00:16:21.848 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:21.848 "is_configured": true, 00:16:21.848 "data_offset": 0, 00:16:21.848 "data_size": 65536 00:16:21.848 }, 00:16:21.848 { 00:16:21.848 "name": "BaseBdev4", 00:16:21.848 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:21.848 "is_configured": true, 00:16:21.848 "data_offset": 0, 00:16:21.848 "data_size": 65536 00:16:21.848 } 00:16:21.848 ] 00:16:21.848 }' 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.848 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.107 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.107 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.107 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.107 07:48:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.107 07:48:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.107 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.107 07:48:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.107 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.107 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.367 [2024-11-29 07:48:12.057176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.367 BaseBdev1 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.367 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.367 07:48:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.367 [ 00:16:22.367 { 00:16:22.367 "name": "BaseBdev1", 00:16:22.367 "aliases": [ 00:16:22.367 "2a650b68-7b69-4af2-b127-b931612c5c24" 00:16:22.367 ], 00:16:22.367 "product_name": "Malloc disk", 00:16:22.367 "block_size": 512, 00:16:22.367 "num_blocks": 65536, 00:16:22.367 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:22.367 "assigned_rate_limits": { 00:16:22.367 "rw_ios_per_sec": 0, 00:16:22.367 "rw_mbytes_per_sec": 0, 00:16:22.367 "r_mbytes_per_sec": 0, 00:16:22.367 "w_mbytes_per_sec": 0 00:16:22.367 }, 00:16:22.367 "claimed": true, 00:16:22.367 "claim_type": "exclusive_write", 00:16:22.367 "zoned": false, 00:16:22.367 "supported_io_types": { 00:16:22.367 "read": true, 00:16:22.367 "write": true, 00:16:22.367 "unmap": true, 00:16:22.367 "flush": true, 00:16:22.367 "reset": true, 00:16:22.367 "nvme_admin": false, 00:16:22.368 "nvme_io": false, 00:16:22.368 "nvme_io_md": false, 00:16:22.368 "write_zeroes": true, 00:16:22.368 "zcopy": true, 00:16:22.368 "get_zone_info": false, 00:16:22.368 "zone_management": false, 00:16:22.368 "zone_append": false, 00:16:22.368 "compare": false, 00:16:22.368 "compare_and_write": false, 00:16:22.368 "abort": true, 00:16:22.368 "seek_hole": false, 00:16:22.368 "seek_data": false, 00:16:22.368 "copy": true, 00:16:22.368 "nvme_iov_md": false 00:16:22.368 }, 00:16:22.368 "memory_domains": [ 00:16:22.368 { 00:16:22.368 "dma_device_id": "system", 00:16:22.368 "dma_device_type": 1 00:16:22.368 }, 00:16:22.368 { 00:16:22.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.368 "dma_device_type": 2 00:16:22.368 } 00:16:22.368 ], 00:16:22.368 "driver_specific": {} 00:16:22.368 } 00:16:22.368 ] 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.368 07:48:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.368 "name": "Existed_Raid", 00:16:22.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.368 "strip_size_kb": 64, 00:16:22.368 "state": 
"configuring", 00:16:22.368 "raid_level": "raid5f", 00:16:22.368 "superblock": false, 00:16:22.368 "num_base_bdevs": 4, 00:16:22.368 "num_base_bdevs_discovered": 3, 00:16:22.368 "num_base_bdevs_operational": 4, 00:16:22.368 "base_bdevs_list": [ 00:16:22.368 { 00:16:22.368 "name": "BaseBdev1", 00:16:22.368 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:22.368 "is_configured": true, 00:16:22.368 "data_offset": 0, 00:16:22.368 "data_size": 65536 00:16:22.368 }, 00:16:22.368 { 00:16:22.368 "name": null, 00:16:22.368 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:22.368 "is_configured": false, 00:16:22.368 "data_offset": 0, 00:16:22.368 "data_size": 65536 00:16:22.368 }, 00:16:22.368 { 00:16:22.368 "name": "BaseBdev3", 00:16:22.368 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:22.368 "is_configured": true, 00:16:22.368 "data_offset": 0, 00:16:22.368 "data_size": 65536 00:16:22.368 }, 00:16:22.368 { 00:16:22.368 "name": "BaseBdev4", 00:16:22.368 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:22.368 "is_configured": true, 00:16:22.368 "data_offset": 0, 00:16:22.368 "data_size": 65536 00:16:22.368 } 00:16:22.368 ] 00:16:22.368 }' 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.368 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.628 07:48:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.628 [2024-11-29 07:48:12.540360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.628 07:48:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.628 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.888 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.888 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.888 "name": "Existed_Raid", 00:16:22.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.888 "strip_size_kb": 64, 00:16:22.888 "state": "configuring", 00:16:22.888 "raid_level": "raid5f", 00:16:22.888 "superblock": false, 00:16:22.888 "num_base_bdevs": 4, 00:16:22.888 "num_base_bdevs_discovered": 2, 00:16:22.888 "num_base_bdevs_operational": 4, 00:16:22.888 "base_bdevs_list": [ 00:16:22.888 { 00:16:22.888 "name": "BaseBdev1", 00:16:22.888 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:22.888 "is_configured": true, 00:16:22.888 "data_offset": 0, 00:16:22.888 "data_size": 65536 00:16:22.888 }, 00:16:22.888 { 00:16:22.888 "name": null, 00:16:22.888 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:22.888 "is_configured": false, 00:16:22.888 "data_offset": 0, 00:16:22.888 "data_size": 65536 00:16:22.888 }, 00:16:22.888 { 00:16:22.888 "name": null, 00:16:22.888 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:22.888 "is_configured": false, 00:16:22.888 "data_offset": 0, 00:16:22.888 "data_size": 65536 00:16:22.888 }, 00:16:22.888 { 00:16:22.888 "name": "BaseBdev4", 00:16:22.888 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:22.888 "is_configured": true, 00:16:22.888 "data_offset": 0, 00:16:22.888 "data_size": 65536 00:16:22.888 } 00:16:22.888 ] 00:16:22.888 }' 00:16:22.888 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.888 07:48:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.149 07:48:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.149 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.149 07:48:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 [2024-11-29 07:48:13.039508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.149 
07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.149 "name": "Existed_Raid", 00:16:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.149 "strip_size_kb": 64, 00:16:23.149 "state": "configuring", 00:16:23.149 "raid_level": "raid5f", 00:16:23.149 "superblock": false, 00:16:23.149 "num_base_bdevs": 4, 00:16:23.149 "num_base_bdevs_discovered": 3, 00:16:23.149 "num_base_bdevs_operational": 4, 00:16:23.149 "base_bdevs_list": [ 00:16:23.149 { 00:16:23.149 "name": "BaseBdev1", 00:16:23.149 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:23.149 "is_configured": true, 00:16:23.149 "data_offset": 0, 00:16:23.149 "data_size": 65536 00:16:23.149 }, 00:16:23.149 { 00:16:23.149 "name": null, 00:16:23.149 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:23.149 "is_configured": 
false, 00:16:23.149 "data_offset": 0, 00:16:23.149 "data_size": 65536 00:16:23.149 }, 00:16:23.149 { 00:16:23.149 "name": "BaseBdev3", 00:16:23.149 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:23.149 "is_configured": true, 00:16:23.149 "data_offset": 0, 00:16:23.149 "data_size": 65536 00:16:23.149 }, 00:16:23.149 { 00:16:23.149 "name": "BaseBdev4", 00:16:23.149 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:23.149 "is_configured": true, 00:16:23.149 "data_offset": 0, 00:16:23.149 "data_size": 65536 00:16:23.149 } 00:16:23.149 ] 00:16:23.149 }' 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.149 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 [2024-11-29 07:48:13.482801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.719 "name": "Existed_Raid", 00:16:23.719 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:23.719 "strip_size_kb": 64, 00:16:23.719 "state": "configuring", 00:16:23.719 "raid_level": "raid5f", 00:16:23.719 "superblock": false, 00:16:23.719 "num_base_bdevs": 4, 00:16:23.719 "num_base_bdevs_discovered": 2, 00:16:23.719 "num_base_bdevs_operational": 4, 00:16:23.719 "base_bdevs_list": [ 00:16:23.719 { 00:16:23.719 "name": null, 00:16:23.719 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:23.719 "is_configured": false, 00:16:23.719 "data_offset": 0, 00:16:23.719 "data_size": 65536 00:16:23.719 }, 00:16:23.719 { 00:16:23.719 "name": null, 00:16:23.719 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:23.719 "is_configured": false, 00:16:23.719 "data_offset": 0, 00:16:23.719 "data_size": 65536 00:16:23.719 }, 00:16:23.719 { 00:16:23.719 "name": "BaseBdev3", 00:16:23.719 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:23.719 "is_configured": true, 00:16:23.719 "data_offset": 0, 00:16:23.719 "data_size": 65536 00:16:23.719 }, 00:16:23.719 { 00:16:23.719 "name": "BaseBdev4", 00:16:23.719 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:23.719 "is_configured": true, 00:16:23.719 "data_offset": 0, 00:16:23.719 "data_size": 65536 00:16:23.719 } 00:16:23.719 ] 00:16:23.719 }' 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.719 07:48:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 [2024-11-29 07:48:14.064879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.289 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.289 "name": "Existed_Raid", 00:16:24.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.289 "strip_size_kb": 64, 00:16:24.289 "state": "configuring", 00:16:24.289 "raid_level": "raid5f", 00:16:24.289 "superblock": false, 00:16:24.289 "num_base_bdevs": 4, 00:16:24.289 "num_base_bdevs_discovered": 3, 00:16:24.289 "num_base_bdevs_operational": 4, 00:16:24.289 "base_bdevs_list": [ 00:16:24.289 { 00:16:24.289 "name": null, 00:16:24.289 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:24.289 "is_configured": false, 00:16:24.289 "data_offset": 0, 00:16:24.289 "data_size": 65536 00:16:24.289 }, 00:16:24.289 { 00:16:24.289 "name": "BaseBdev2", 00:16:24.289 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:24.289 "is_configured": true, 00:16:24.289 "data_offset": 0, 00:16:24.289 "data_size": 65536 00:16:24.289 }, 00:16:24.289 { 00:16:24.289 "name": "BaseBdev3", 00:16:24.289 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:24.289 "is_configured": true, 00:16:24.289 "data_offset": 0, 00:16:24.289 "data_size": 65536 00:16:24.289 }, 00:16:24.289 { 00:16:24.289 "name": "BaseBdev4", 00:16:24.289 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:24.289 "is_configured": true, 00:16:24.289 "data_offset": 0, 00:16:24.289 "data_size": 65536 00:16:24.289 } 00:16:24.290 ] 00:16:24.290 }' 00:16:24.290 07:48:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.290 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.549 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.549 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.549 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.549 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a650b68-7b69-4af2-b127-b931612c5c24 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.809 [2024-11-29 07:48:14.584023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:24.809 [2024-11-29 
07:48:14.584071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:24.809 [2024-11-29 07:48:14.584079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:24.809 [2024-11-29 07:48:14.584363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:24.809 [2024-11-29 07:48:14.591147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:24.809 [2024-11-29 07:48:14.591204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:24.809 [2024-11-29 07:48:14.591517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.809 NewBaseBdev 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.809 [ 00:16:24.809 { 00:16:24.809 "name": "NewBaseBdev", 00:16:24.809 "aliases": [ 00:16:24.809 "2a650b68-7b69-4af2-b127-b931612c5c24" 00:16:24.809 ], 00:16:24.809 "product_name": "Malloc disk", 00:16:24.809 "block_size": 512, 00:16:24.809 "num_blocks": 65536, 00:16:24.809 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:24.809 "assigned_rate_limits": { 00:16:24.809 "rw_ios_per_sec": 0, 00:16:24.809 "rw_mbytes_per_sec": 0, 00:16:24.809 "r_mbytes_per_sec": 0, 00:16:24.809 "w_mbytes_per_sec": 0 00:16:24.809 }, 00:16:24.809 "claimed": true, 00:16:24.809 "claim_type": "exclusive_write", 00:16:24.809 "zoned": false, 00:16:24.809 "supported_io_types": { 00:16:24.809 "read": true, 00:16:24.809 "write": true, 00:16:24.809 "unmap": true, 00:16:24.809 "flush": true, 00:16:24.809 "reset": true, 00:16:24.809 "nvme_admin": false, 00:16:24.809 "nvme_io": false, 00:16:24.809 "nvme_io_md": false, 00:16:24.809 "write_zeroes": true, 00:16:24.809 "zcopy": true, 00:16:24.809 "get_zone_info": false, 00:16:24.809 "zone_management": false, 00:16:24.809 "zone_append": false, 00:16:24.809 "compare": false, 00:16:24.809 "compare_and_write": false, 00:16:24.809 "abort": true, 00:16:24.809 "seek_hole": false, 00:16:24.809 "seek_data": false, 00:16:24.809 "copy": true, 00:16:24.809 "nvme_iov_md": false 00:16:24.809 }, 00:16:24.809 "memory_domains": [ 00:16:24.809 { 00:16:24.809 "dma_device_id": "system", 00:16:24.809 "dma_device_type": 1 00:16:24.809 }, 00:16:24.809 { 00:16:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.809 "dma_device_type": 2 00:16:24.809 } 
00:16:24.809 ], 00:16:24.809 "driver_specific": {} 00:16:24.809 } 00:16:24.809 ] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.809 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.810 "name": "Existed_Raid", 00:16:24.810 "uuid": "9fa0a542-084d-4e73-a71e-179ef82b319d", 00:16:24.810 "strip_size_kb": 64, 00:16:24.810 "state": "online", 00:16:24.810 "raid_level": "raid5f", 00:16:24.810 "superblock": false, 00:16:24.810 "num_base_bdevs": 4, 00:16:24.810 "num_base_bdevs_discovered": 4, 00:16:24.810 "num_base_bdevs_operational": 4, 00:16:24.810 "base_bdevs_list": [ 00:16:24.810 { 00:16:24.810 "name": "NewBaseBdev", 00:16:24.810 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:24.810 "is_configured": true, 00:16:24.810 "data_offset": 0, 00:16:24.810 "data_size": 65536 00:16:24.810 }, 00:16:24.810 { 00:16:24.810 "name": "BaseBdev2", 00:16:24.810 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:24.810 "is_configured": true, 00:16:24.810 "data_offset": 0, 00:16:24.810 "data_size": 65536 00:16:24.810 }, 00:16:24.810 { 00:16:24.810 "name": "BaseBdev3", 00:16:24.810 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:24.810 "is_configured": true, 00:16:24.810 "data_offset": 0, 00:16:24.810 "data_size": 65536 00:16:24.810 }, 00:16:24.810 { 00:16:24.810 "name": "BaseBdev4", 00:16:24.810 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:24.810 "is_configured": true, 00:16:24.810 "data_offset": 0, 00:16:24.810 "data_size": 65536 00:16:24.810 } 00:16:24.810 ] 00:16:24.810 }' 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.810 07:48:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 [2024-11-29 07:48:15.063094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.378 "name": "Existed_Raid", 00:16:25.378 "aliases": [ 00:16:25.378 "9fa0a542-084d-4e73-a71e-179ef82b319d" 00:16:25.378 ], 00:16:25.378 "product_name": "Raid Volume", 00:16:25.378 "block_size": 512, 00:16:25.378 "num_blocks": 196608, 00:16:25.378 "uuid": "9fa0a542-084d-4e73-a71e-179ef82b319d", 00:16:25.378 "assigned_rate_limits": { 00:16:25.378 "rw_ios_per_sec": 0, 00:16:25.378 "rw_mbytes_per_sec": 0, 00:16:25.378 "r_mbytes_per_sec": 0, 00:16:25.378 "w_mbytes_per_sec": 0 00:16:25.378 }, 00:16:25.378 "claimed": false, 00:16:25.378 "zoned": false, 00:16:25.378 "supported_io_types": { 00:16:25.378 "read": true, 00:16:25.378 "write": true, 00:16:25.378 "unmap": false, 00:16:25.378 "flush": false, 00:16:25.378 "reset": true, 00:16:25.378 "nvme_admin": false, 00:16:25.378 "nvme_io": false, 00:16:25.378 "nvme_io_md": 
false, 00:16:25.378 "write_zeroes": true, 00:16:25.378 "zcopy": false, 00:16:25.378 "get_zone_info": false, 00:16:25.378 "zone_management": false, 00:16:25.378 "zone_append": false, 00:16:25.378 "compare": false, 00:16:25.378 "compare_and_write": false, 00:16:25.378 "abort": false, 00:16:25.378 "seek_hole": false, 00:16:25.378 "seek_data": false, 00:16:25.378 "copy": false, 00:16:25.378 "nvme_iov_md": false 00:16:25.378 }, 00:16:25.378 "driver_specific": { 00:16:25.378 "raid": { 00:16:25.378 "uuid": "9fa0a542-084d-4e73-a71e-179ef82b319d", 00:16:25.378 "strip_size_kb": 64, 00:16:25.378 "state": "online", 00:16:25.378 "raid_level": "raid5f", 00:16:25.378 "superblock": false, 00:16:25.378 "num_base_bdevs": 4, 00:16:25.378 "num_base_bdevs_discovered": 4, 00:16:25.378 "num_base_bdevs_operational": 4, 00:16:25.378 "base_bdevs_list": [ 00:16:25.378 { 00:16:25.378 "name": "NewBaseBdev", 00:16:25.378 "uuid": "2a650b68-7b69-4af2-b127-b931612c5c24", 00:16:25.378 "is_configured": true, 00:16:25.378 "data_offset": 0, 00:16:25.378 "data_size": 65536 00:16:25.378 }, 00:16:25.378 { 00:16:25.378 "name": "BaseBdev2", 00:16:25.378 "uuid": "e025d29c-8cb2-42f4-a799-b065fb9cc313", 00:16:25.378 "is_configured": true, 00:16:25.378 "data_offset": 0, 00:16:25.378 "data_size": 65536 00:16:25.378 }, 00:16:25.378 { 00:16:25.378 "name": "BaseBdev3", 00:16:25.378 "uuid": "a5b0817c-6c97-4779-9591-d14ab1c7c295", 00:16:25.378 "is_configured": true, 00:16:25.378 "data_offset": 0, 00:16:25.378 "data_size": 65536 00:16:25.378 }, 00:16:25.378 { 00:16:25.378 "name": "BaseBdev4", 00:16:25.378 "uuid": "813703f8-6754-44c7-a7f4-016ec683e191", 00:16:25.378 "is_configured": true, 00:16:25.378 "data_offset": 0, 00:16:25.378 "data_size": 65536 00:16:25.378 } 00:16:25.378 ] 00:16:25.378 } 00:16:25.378 } 00:16:25.378 }' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.378 07:48:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.378 BaseBdev2 00:16:25.378 BaseBdev3 00:16:25.378 BaseBdev4' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.378 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.638 07:48:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.638 [2024-11-29 07:48:15.370358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.638 [2024-11-29 07:48:15.370427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.638 [2024-11-29 07:48:15.370499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.638 [2024-11-29 07:48:15.370776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.638 [2024-11-29 07:48:15.370786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82454 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82454 ']' 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82454 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82454 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.638 killing process with pid 82454 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82454' 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82454 00:16:25.638 [2024-11-29 07:48:15.419665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.638 07:48:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82454 00:16:25.897 [2024-11-29 07:48:15.794041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.279 ************************************ 00:16:27.279 END TEST raid5f_state_function_test 00:16:27.279 ************************************ 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.279 00:16:27.279 real 0m11.225s 00:16:27.279 user 0m17.850s 00:16:27.279 sys 0m2.023s 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.279 07:48:16 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:27.279 07:48:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:27.279 07:48:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.279 07:48:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.279 ************************************ 00:16:27.279 START TEST 
raid5f_state_function_test_sb 00:16:27.279 ************************************ 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.279 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:27.279 
07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:27.280 Process raid pid: 83120 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83120 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83120' 00:16:27.280 07:48:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83120 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83120 ']' 00:16:27.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.280 07:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.280 [2024-11-29 07:48:17.051156] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:27.280 [2024-11-29 07:48:17.051285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.540 [2024-11-29 07:48:17.231293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.540 [2024-11-29 07:48:17.336762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.799 [2024-11-29 07:48:17.535713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.799 [2024-11-29 07:48:17.535798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.059 [2024-11-29 07:48:17.850360] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.059 [2024-11-29 07:48:17.850469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.059 [2024-11-29 07:48:17.850483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.059 [2024-11-29 07:48:17.850493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.059 [2024-11-29 07:48:17.850499] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:28.059 [2024-11-29 07:48:17.850507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.059 [2024-11-29 07:48:17.850513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:28.059 [2024-11-29 07:48:17.850521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.059 "name": "Existed_Raid", 00:16:28.059 "uuid": "fae41ea7-b304-49db-a3ad-daae41e3fcc1", 00:16:28.059 "strip_size_kb": 64, 00:16:28.059 "state": "configuring", 00:16:28.059 "raid_level": "raid5f", 00:16:28.059 "superblock": true, 00:16:28.059 "num_base_bdevs": 4, 00:16:28.059 "num_base_bdevs_discovered": 0, 00:16:28.059 "num_base_bdevs_operational": 4, 00:16:28.059 "base_bdevs_list": [ 00:16:28.059 { 00:16:28.059 "name": "BaseBdev1", 00:16:28.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.059 "is_configured": false, 00:16:28.059 "data_offset": 0, 00:16:28.059 "data_size": 0 00:16:28.059 }, 00:16:28.059 { 00:16:28.059 "name": "BaseBdev2", 00:16:28.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.059 "is_configured": false, 00:16:28.059 "data_offset": 0, 00:16:28.059 "data_size": 0 00:16:28.059 }, 00:16:28.059 { 00:16:28.059 "name": "BaseBdev3", 00:16:28.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.059 "is_configured": false, 00:16:28.059 "data_offset": 0, 00:16:28.059 "data_size": 0 00:16:28.059 }, 00:16:28.059 { 00:16:28.059 "name": "BaseBdev4", 00:16:28.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.059 "is_configured": false, 00:16:28.059 "data_offset": 0, 00:16:28.059 "data_size": 0 00:16:28.059 } 00:16:28.059 ] 00:16:28.059 }' 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.059 07:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.645 [2024-11-29 07:48:18.285526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.645 [2024-11-29 07:48:18.285607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.645 [2024-11-29 07:48:18.297520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.645 [2024-11-29 07:48:18.297591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.645 [2024-11-29 07:48:18.297618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.645 [2024-11-29 07:48:18.297640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.645 [2024-11-29 07:48:18.297657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.645 [2024-11-29 07:48:18.297677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.645 [2024-11-29 07:48:18.297693] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:28.645 [2024-11-29 07:48:18.297713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.645 [2024-11-29 07:48:18.342834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.645 BaseBdev1 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.645 [ 00:16:28.645 { 00:16:28.645 "name": "BaseBdev1", 00:16:28.645 "aliases": [ 00:16:28.645 "f273471f-6c52-49e8-9aef-5552984ba436" 00:16:28.645 ], 00:16:28.645 "product_name": "Malloc disk", 00:16:28.645 "block_size": 512, 00:16:28.645 "num_blocks": 65536, 00:16:28.645 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:28.645 "assigned_rate_limits": { 00:16:28.645 "rw_ios_per_sec": 0, 00:16:28.645 "rw_mbytes_per_sec": 0, 00:16:28.645 "r_mbytes_per_sec": 0, 00:16:28.645 "w_mbytes_per_sec": 0 00:16:28.645 }, 00:16:28.645 "claimed": true, 00:16:28.645 "claim_type": "exclusive_write", 00:16:28.645 "zoned": false, 00:16:28.645 "supported_io_types": { 00:16:28.645 "read": true, 00:16:28.645 "write": true, 00:16:28.645 "unmap": true, 00:16:28.645 "flush": true, 00:16:28.645 "reset": true, 00:16:28.645 "nvme_admin": false, 00:16:28.645 "nvme_io": false, 00:16:28.645 "nvme_io_md": false, 00:16:28.645 "write_zeroes": true, 00:16:28.645 "zcopy": true, 00:16:28.645 "get_zone_info": false, 00:16:28.645 "zone_management": false, 00:16:28.645 "zone_append": false, 00:16:28.645 "compare": false, 00:16:28.645 "compare_and_write": false, 00:16:28.645 "abort": true, 00:16:28.645 "seek_hole": false, 00:16:28.645 "seek_data": false, 00:16:28.645 "copy": true, 00:16:28.645 "nvme_iov_md": false 00:16:28.645 }, 00:16:28.645 "memory_domains": [ 00:16:28.645 { 00:16:28.645 "dma_device_id": "system", 00:16:28.645 "dma_device_type": 1 00:16:28.645 }, 00:16:28.645 { 00:16:28.645 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:28.645 "dma_device_type": 2 00:16:28.645 } 00:16:28.645 ], 00:16:28.645 "driver_specific": {} 00:16:28.645 } 00:16:28.645 ] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:28.645 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.646 "name": "Existed_Raid", 00:16:28.646 "uuid": "41459271-3ce8-4787-b5a0-36a0c7350f01", 00:16:28.646 "strip_size_kb": 64, 00:16:28.646 "state": "configuring", 00:16:28.646 "raid_level": "raid5f", 00:16:28.646 "superblock": true, 00:16:28.646 "num_base_bdevs": 4, 00:16:28.646 "num_base_bdevs_discovered": 1, 00:16:28.646 "num_base_bdevs_operational": 4, 00:16:28.646 "base_bdevs_list": [ 00:16:28.646 { 00:16:28.646 "name": "BaseBdev1", 00:16:28.646 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:28.646 "is_configured": true, 00:16:28.646 "data_offset": 2048, 00:16:28.646 "data_size": 63488 00:16:28.646 }, 00:16:28.646 { 00:16:28.646 "name": "BaseBdev2", 00:16:28.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.646 "is_configured": false, 00:16:28.646 "data_offset": 0, 00:16:28.646 "data_size": 0 00:16:28.646 }, 00:16:28.646 { 00:16:28.646 "name": "BaseBdev3", 00:16:28.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.646 "is_configured": false, 00:16:28.646 "data_offset": 0, 00:16:28.646 "data_size": 0 00:16:28.646 }, 00:16:28.646 { 00:16:28.646 "name": "BaseBdev4", 00:16:28.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.646 "is_configured": false, 00:16:28.646 "data_offset": 0, 00:16:28.646 "data_size": 0 00:16:28.646 } 00:16:28.646 ] 00:16:28.646 }' 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.646 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.905 07:48:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.905 [2024-11-29 07:48:18.830057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.905 [2024-11-29 07:48:18.830173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.905 [2024-11-29 07:48:18.842089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.905 [2024-11-29 07:48:18.843947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.905 [2024-11-29 07:48:18.844023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.905 [2024-11-29 07:48:18.844051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.905 [2024-11-29 07:48:18.844076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.905 [2024-11-29 07:48:18.844094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:28.905 [2024-11-29 07:48:18.844124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:28.905 07:48:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.164 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.165 07:48:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.165 "name": "Existed_Raid", 00:16:29.165 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:29.165 "strip_size_kb": 64, 00:16:29.165 "state": "configuring", 00:16:29.165 "raid_level": "raid5f", 00:16:29.165 "superblock": true, 00:16:29.165 "num_base_bdevs": 4, 00:16:29.165 "num_base_bdevs_discovered": 1, 00:16:29.165 "num_base_bdevs_operational": 4, 00:16:29.165 "base_bdevs_list": [ 00:16:29.165 { 00:16:29.165 "name": "BaseBdev1", 00:16:29.165 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:29.165 "is_configured": true, 00:16:29.165 "data_offset": 2048, 00:16:29.165 "data_size": 63488 00:16:29.165 }, 00:16:29.165 { 00:16:29.165 "name": "BaseBdev2", 00:16:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.165 "is_configured": false, 00:16:29.165 "data_offset": 0, 00:16:29.165 "data_size": 0 00:16:29.165 }, 00:16:29.165 { 00:16:29.165 "name": "BaseBdev3", 00:16:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.165 "is_configured": false, 00:16:29.165 "data_offset": 0, 00:16:29.165 "data_size": 0 00:16:29.165 }, 00:16:29.165 { 00:16:29.165 "name": "BaseBdev4", 00:16:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.165 "is_configured": false, 00:16:29.165 "data_offset": 0, 00:16:29.165 "data_size": 0 00:16:29.165 } 00:16:29.165 ] 00:16:29.165 }' 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.165 07:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.425 [2024-11-29 07:48:19.306888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.425 BaseBdev2 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.425 [ 00:16:29.425 { 00:16:29.425 "name": "BaseBdev2", 00:16:29.425 "aliases": [ 00:16:29.425 
"d9982715-2e55-4cc1-aa77-d2f82001aafb" 00:16:29.425 ], 00:16:29.425 "product_name": "Malloc disk", 00:16:29.425 "block_size": 512, 00:16:29.425 "num_blocks": 65536, 00:16:29.425 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:29.425 "assigned_rate_limits": { 00:16:29.425 "rw_ios_per_sec": 0, 00:16:29.425 "rw_mbytes_per_sec": 0, 00:16:29.425 "r_mbytes_per_sec": 0, 00:16:29.425 "w_mbytes_per_sec": 0 00:16:29.425 }, 00:16:29.425 "claimed": true, 00:16:29.425 "claim_type": "exclusive_write", 00:16:29.425 "zoned": false, 00:16:29.425 "supported_io_types": { 00:16:29.425 "read": true, 00:16:29.425 "write": true, 00:16:29.425 "unmap": true, 00:16:29.425 "flush": true, 00:16:29.425 "reset": true, 00:16:29.425 "nvme_admin": false, 00:16:29.425 "nvme_io": false, 00:16:29.425 "nvme_io_md": false, 00:16:29.425 "write_zeroes": true, 00:16:29.425 "zcopy": true, 00:16:29.425 "get_zone_info": false, 00:16:29.425 "zone_management": false, 00:16:29.425 "zone_append": false, 00:16:29.425 "compare": false, 00:16:29.425 "compare_and_write": false, 00:16:29.425 "abort": true, 00:16:29.425 "seek_hole": false, 00:16:29.425 "seek_data": false, 00:16:29.425 "copy": true, 00:16:29.425 "nvme_iov_md": false 00:16:29.425 }, 00:16:29.425 "memory_domains": [ 00:16:29.425 { 00:16:29.425 "dma_device_id": "system", 00:16:29.425 "dma_device_type": 1 00:16:29.425 }, 00:16:29.425 { 00:16:29.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.425 "dma_device_type": 2 00:16:29.425 } 00:16:29.425 ], 00:16:29.425 "driver_specific": {} 00:16:29.425 } 00:16:29.425 ] 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.425 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.684 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.684 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.684 "name": "Existed_Raid", 00:16:29.685 "uuid": 
"8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:29.685 "strip_size_kb": 64, 00:16:29.685 "state": "configuring", 00:16:29.685 "raid_level": "raid5f", 00:16:29.685 "superblock": true, 00:16:29.685 "num_base_bdevs": 4, 00:16:29.685 "num_base_bdevs_discovered": 2, 00:16:29.685 "num_base_bdevs_operational": 4, 00:16:29.685 "base_bdevs_list": [ 00:16:29.685 { 00:16:29.685 "name": "BaseBdev1", 00:16:29.685 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 2048, 00:16:29.685 "data_size": 63488 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev2", 00:16:29.685 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 2048, 00:16:29.685 "data_size": 63488 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev3", 00:16:29.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.685 "is_configured": false, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 0 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev4", 00:16:29.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.685 "is_configured": false, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 0 00:16:29.685 } 00:16:29.685 ] 00:16:29.685 }' 00:16:29.685 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.685 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 [2024-11-29 07:48:19.825006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.944 BaseBdev3 
00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 [ 00:16:29.944 { 00:16:29.944 "name": "BaseBdev3", 00:16:29.944 "aliases": [ 00:16:29.944 "5e217cfb-c788-4148-b4cf-b5924ee10464" 00:16:29.944 ], 00:16:29.944 "product_name": "Malloc disk", 00:16:29.944 "block_size": 512, 00:16:29.944 "num_blocks": 65536, 00:16:29.944 "uuid": "5e217cfb-c788-4148-b4cf-b5924ee10464", 00:16:29.944 
"assigned_rate_limits": { 00:16:29.944 "rw_ios_per_sec": 0, 00:16:29.944 "rw_mbytes_per_sec": 0, 00:16:29.944 "r_mbytes_per_sec": 0, 00:16:29.944 "w_mbytes_per_sec": 0 00:16:29.944 }, 00:16:29.944 "claimed": true, 00:16:29.944 "claim_type": "exclusive_write", 00:16:29.944 "zoned": false, 00:16:29.944 "supported_io_types": { 00:16:29.944 "read": true, 00:16:29.944 "write": true, 00:16:29.944 "unmap": true, 00:16:29.944 "flush": true, 00:16:29.944 "reset": true, 00:16:29.944 "nvme_admin": false, 00:16:29.944 "nvme_io": false, 00:16:29.944 "nvme_io_md": false, 00:16:29.944 "write_zeroes": true, 00:16:29.944 "zcopy": true, 00:16:29.944 "get_zone_info": false, 00:16:29.944 "zone_management": false, 00:16:29.944 "zone_append": false, 00:16:29.944 "compare": false, 00:16:29.944 "compare_and_write": false, 00:16:29.944 "abort": true, 00:16:29.944 "seek_hole": false, 00:16:29.944 "seek_data": false, 00:16:29.944 "copy": true, 00:16:29.944 "nvme_iov_md": false 00:16:29.944 }, 00:16:29.944 "memory_domains": [ 00:16:29.944 { 00:16:29.944 "dma_device_id": "system", 00:16:29.944 "dma_device_type": 1 00:16:29.944 }, 00:16:29.944 { 00:16:29.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.944 "dma_device_type": 2 00:16:29.944 } 00:16:29.944 ], 00:16:29.944 "driver_specific": {} 00:16:29.944 } 00:16:29.944 ] 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.944 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.204 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.204 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.204 "name": "Existed_Raid", 00:16:30.204 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:30.204 "strip_size_kb": 64, 00:16:30.204 "state": "configuring", 00:16:30.204 "raid_level": "raid5f", 00:16:30.204 "superblock": true, 00:16:30.204 "num_base_bdevs": 4, 00:16:30.204 "num_base_bdevs_discovered": 3, 
00:16:30.204 "num_base_bdevs_operational": 4, 00:16:30.204 "base_bdevs_list": [ 00:16:30.204 { 00:16:30.204 "name": "BaseBdev1", 00:16:30.204 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:30.204 "is_configured": true, 00:16:30.204 "data_offset": 2048, 00:16:30.204 "data_size": 63488 00:16:30.204 }, 00:16:30.204 { 00:16:30.204 "name": "BaseBdev2", 00:16:30.204 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:30.204 "is_configured": true, 00:16:30.204 "data_offset": 2048, 00:16:30.204 "data_size": 63488 00:16:30.204 }, 00:16:30.204 { 00:16:30.204 "name": "BaseBdev3", 00:16:30.204 "uuid": "5e217cfb-c788-4148-b4cf-b5924ee10464", 00:16:30.204 "is_configured": true, 00:16:30.204 "data_offset": 2048, 00:16:30.204 "data_size": 63488 00:16:30.204 }, 00:16:30.204 { 00:16:30.204 "name": "BaseBdev4", 00:16:30.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.204 "is_configured": false, 00:16:30.204 "data_offset": 0, 00:16:30.204 "data_size": 0 00:16:30.204 } 00:16:30.204 ] 00:16:30.204 }' 00:16:30.204 07:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.204 07:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.463 [2024-11-29 07:48:20.301058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.463 [2024-11-29 07:48:20.301394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:30.463 [2024-11-29 07:48:20.301410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:30.463 [2024-11-29 
07:48:20.301667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:30.463 BaseBdev4 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.463 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.464 [2024-11-29 07:48:20.308560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:30.464 [2024-11-29 07:48:20.308627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:30.464 [2024-11-29 07:48:20.308923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:30.464 07:48:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.464 [ 00:16:30.464 { 00:16:30.464 "name": "BaseBdev4", 00:16:30.464 "aliases": [ 00:16:30.464 "85e0ba00-79e3-4013-98ec-9c919e446f33" 00:16:30.464 ], 00:16:30.464 "product_name": "Malloc disk", 00:16:30.464 "block_size": 512, 00:16:30.464 "num_blocks": 65536, 00:16:30.464 "uuid": "85e0ba00-79e3-4013-98ec-9c919e446f33", 00:16:30.464 "assigned_rate_limits": { 00:16:30.464 "rw_ios_per_sec": 0, 00:16:30.464 "rw_mbytes_per_sec": 0, 00:16:30.464 "r_mbytes_per_sec": 0, 00:16:30.464 "w_mbytes_per_sec": 0 00:16:30.464 }, 00:16:30.464 "claimed": true, 00:16:30.464 "claim_type": "exclusive_write", 00:16:30.464 "zoned": false, 00:16:30.464 "supported_io_types": { 00:16:30.464 "read": true, 00:16:30.464 "write": true, 00:16:30.464 "unmap": true, 00:16:30.464 "flush": true, 00:16:30.464 "reset": true, 00:16:30.464 "nvme_admin": false, 00:16:30.464 "nvme_io": false, 00:16:30.464 "nvme_io_md": false, 00:16:30.464 "write_zeroes": true, 00:16:30.464 "zcopy": true, 00:16:30.464 "get_zone_info": false, 00:16:30.464 "zone_management": false, 00:16:30.464 "zone_append": false, 00:16:30.464 "compare": false, 00:16:30.464 "compare_and_write": false, 00:16:30.464 "abort": true, 00:16:30.464 "seek_hole": false, 00:16:30.464 "seek_data": false, 00:16:30.464 "copy": true, 00:16:30.464 "nvme_iov_md": false 00:16:30.464 }, 00:16:30.464 "memory_domains": [ 00:16:30.464 { 00:16:30.464 "dma_device_id": "system", 00:16:30.464 "dma_device_type": 1 00:16:30.464 }, 00:16:30.464 { 00:16:30.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.464 "dma_device_type": 2 00:16:30.464 } 00:16:30.464 ], 00:16:30.464 "driver_specific": {} 00:16:30.464 } 00:16:30.464 ] 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.464 07:48:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.464 "name": "Existed_Raid", 00:16:30.464 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:30.464 "strip_size_kb": 64, 00:16:30.464 "state": "online", 00:16:30.464 "raid_level": "raid5f", 00:16:30.464 "superblock": true, 00:16:30.464 "num_base_bdevs": 4, 00:16:30.464 "num_base_bdevs_discovered": 4, 00:16:30.464 "num_base_bdevs_operational": 4, 00:16:30.464 "base_bdevs_list": [ 00:16:30.464 { 00:16:30.464 "name": "BaseBdev1", 00:16:30.464 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:30.464 "is_configured": true, 00:16:30.464 "data_offset": 2048, 00:16:30.464 "data_size": 63488 00:16:30.464 }, 00:16:30.464 { 00:16:30.464 "name": "BaseBdev2", 00:16:30.464 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:30.464 "is_configured": true, 00:16:30.464 "data_offset": 2048, 00:16:30.464 "data_size": 63488 00:16:30.464 }, 00:16:30.464 { 00:16:30.464 "name": "BaseBdev3", 00:16:30.464 "uuid": "5e217cfb-c788-4148-b4cf-b5924ee10464", 00:16:30.464 "is_configured": true, 00:16:30.464 "data_offset": 2048, 00:16:30.464 "data_size": 63488 00:16:30.464 }, 00:16:30.464 { 00:16:30.464 "name": "BaseBdev4", 00:16:30.464 "uuid": "85e0ba00-79e3-4013-98ec-9c919e446f33", 00:16:30.464 "is_configured": true, 00:16:30.464 "data_offset": 2048, 00:16:30.464 "data_size": 63488 00:16:30.464 } 00:16:30.464 ] 00:16:30.464 }' 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.464 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.033 [2024-11-29 07:48:20.820134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.033 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.033 "name": "Existed_Raid", 00:16:31.033 "aliases": [ 00:16:31.033 "8913d988-76c3-4c1d-98e6-53de27c6d4a9" 00:16:31.033 ], 00:16:31.033 "product_name": "Raid Volume", 00:16:31.033 "block_size": 512, 00:16:31.033 "num_blocks": 190464, 00:16:31.033 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:31.033 "assigned_rate_limits": { 00:16:31.033 "rw_ios_per_sec": 0, 00:16:31.033 "rw_mbytes_per_sec": 0, 00:16:31.033 "r_mbytes_per_sec": 0, 00:16:31.033 "w_mbytes_per_sec": 0 00:16:31.033 }, 00:16:31.033 "claimed": false, 00:16:31.033 "zoned": false, 00:16:31.033 "supported_io_types": { 00:16:31.034 "read": true, 00:16:31.034 "write": true, 00:16:31.034 "unmap": false, 00:16:31.034 "flush": false, 
00:16:31.034 "reset": true, 00:16:31.034 "nvme_admin": false, 00:16:31.034 "nvme_io": false, 00:16:31.034 "nvme_io_md": false, 00:16:31.034 "write_zeroes": true, 00:16:31.034 "zcopy": false, 00:16:31.034 "get_zone_info": false, 00:16:31.034 "zone_management": false, 00:16:31.034 "zone_append": false, 00:16:31.034 "compare": false, 00:16:31.034 "compare_and_write": false, 00:16:31.034 "abort": false, 00:16:31.034 "seek_hole": false, 00:16:31.034 "seek_data": false, 00:16:31.034 "copy": false, 00:16:31.034 "nvme_iov_md": false 00:16:31.034 }, 00:16:31.034 "driver_specific": { 00:16:31.034 "raid": { 00:16:31.034 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:31.034 "strip_size_kb": 64, 00:16:31.034 "state": "online", 00:16:31.034 "raid_level": "raid5f", 00:16:31.034 "superblock": true, 00:16:31.034 "num_base_bdevs": 4, 00:16:31.034 "num_base_bdevs_discovered": 4, 00:16:31.034 "num_base_bdevs_operational": 4, 00:16:31.034 "base_bdevs_list": [ 00:16:31.034 { 00:16:31.034 "name": "BaseBdev1", 00:16:31.034 "uuid": "f273471f-6c52-49e8-9aef-5552984ba436", 00:16:31.034 "is_configured": true, 00:16:31.034 "data_offset": 2048, 00:16:31.034 "data_size": 63488 00:16:31.034 }, 00:16:31.034 { 00:16:31.034 "name": "BaseBdev2", 00:16:31.034 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:31.034 "is_configured": true, 00:16:31.034 "data_offset": 2048, 00:16:31.034 "data_size": 63488 00:16:31.034 }, 00:16:31.034 { 00:16:31.034 "name": "BaseBdev3", 00:16:31.034 "uuid": "5e217cfb-c788-4148-b4cf-b5924ee10464", 00:16:31.034 "is_configured": true, 00:16:31.034 "data_offset": 2048, 00:16:31.034 "data_size": 63488 00:16:31.034 }, 00:16:31.034 { 00:16:31.034 "name": "BaseBdev4", 00:16:31.034 "uuid": "85e0ba00-79e3-4013-98ec-9c919e446f33", 00:16:31.034 "is_configured": true, 00:16:31.034 "data_offset": 2048, 00:16:31.034 "data_size": 63488 00:16:31.034 } 00:16:31.034 ] 00:16:31.034 } 00:16:31.034 } 00:16:31.034 }' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:31.034 BaseBdev2 00:16:31.034 BaseBdev3 00:16:31.034 BaseBdev4' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.034 07:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.293 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:31.293 07:48:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 [2024-11-29 07:48:21.147356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.553 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.553 "name": "Existed_Raid", 00:16:31.553 "uuid": "8913d988-76c3-4c1d-98e6-53de27c6d4a9", 00:16:31.553 "strip_size_kb": 64, 00:16:31.553 "state": "online", 00:16:31.553 "raid_level": "raid5f", 00:16:31.553 "superblock": true, 00:16:31.553 "num_base_bdevs": 4, 00:16:31.553 "num_base_bdevs_discovered": 3, 00:16:31.553 "num_base_bdevs_operational": 3, 00:16:31.553 "base_bdevs_list": [ 00:16:31.553 { 00:16:31.553 "name": 
null, 00:16:31.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.554 "is_configured": false, 00:16:31.554 "data_offset": 0, 00:16:31.554 "data_size": 63488 00:16:31.554 }, 00:16:31.554 { 00:16:31.554 "name": "BaseBdev2", 00:16:31.554 "uuid": "d9982715-2e55-4cc1-aa77-d2f82001aafb", 00:16:31.554 "is_configured": true, 00:16:31.554 "data_offset": 2048, 00:16:31.554 "data_size": 63488 00:16:31.554 }, 00:16:31.554 { 00:16:31.554 "name": "BaseBdev3", 00:16:31.554 "uuid": "5e217cfb-c788-4148-b4cf-b5924ee10464", 00:16:31.554 "is_configured": true, 00:16:31.554 "data_offset": 2048, 00:16:31.554 "data_size": 63488 00:16:31.554 }, 00:16:31.554 { 00:16:31.554 "name": "BaseBdev4", 00:16:31.554 "uuid": "85e0ba00-79e3-4013-98ec-9c919e446f33", 00:16:31.554 "is_configured": true, 00:16:31.554 "data_offset": 2048, 00:16:31.554 "data_size": 63488 00:16:31.554 } 00:16:31.554 ] 00:16:31.554 }' 00:16:31.554 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.554 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 [2024-11-29 07:48:21.728771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.814 [2024-11-29 07:48:21.728983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.074 [2024-11-29 07:48:21.821816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.074 [2024-11-29 07:48:21.881743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.074 07:48:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.333 [2024-11-29 
07:48:22.029342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:32.333 [2024-11-29 07:48:22.029443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.333 07:48:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.333 BaseBdev2 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.333 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.333 [ 00:16:32.333 { 00:16:32.333 "name": "BaseBdev2", 00:16:32.333 "aliases": [ 00:16:32.334 "86c9c36d-09cd-4345-8a27-9fd73a2e6750" 00:16:32.334 ], 00:16:32.334 "product_name": "Malloc disk", 00:16:32.334 "block_size": 512, 00:16:32.334 
"num_blocks": 65536, 00:16:32.334 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:32.334 "assigned_rate_limits": { 00:16:32.334 "rw_ios_per_sec": 0, 00:16:32.334 "rw_mbytes_per_sec": 0, 00:16:32.334 "r_mbytes_per_sec": 0, 00:16:32.334 "w_mbytes_per_sec": 0 00:16:32.334 }, 00:16:32.334 "claimed": false, 00:16:32.334 "zoned": false, 00:16:32.334 "supported_io_types": { 00:16:32.334 "read": true, 00:16:32.334 "write": true, 00:16:32.334 "unmap": true, 00:16:32.334 "flush": true, 00:16:32.334 "reset": true, 00:16:32.334 "nvme_admin": false, 00:16:32.334 "nvme_io": false, 00:16:32.334 "nvme_io_md": false, 00:16:32.334 "write_zeroes": true, 00:16:32.334 "zcopy": true, 00:16:32.334 "get_zone_info": false, 00:16:32.334 "zone_management": false, 00:16:32.334 "zone_append": false, 00:16:32.334 "compare": false, 00:16:32.334 "compare_and_write": false, 00:16:32.334 "abort": true, 00:16:32.334 "seek_hole": false, 00:16:32.334 "seek_data": false, 00:16:32.334 "copy": true, 00:16:32.334 "nvme_iov_md": false 00:16:32.334 }, 00:16:32.334 "memory_domains": [ 00:16:32.334 { 00:16:32.334 "dma_device_id": "system", 00:16:32.334 "dma_device_type": 1 00:16:32.334 }, 00:16:32.334 { 00:16:32.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.334 "dma_device_type": 2 00:16:32.334 } 00:16:32.334 ], 00:16:32.334 "driver_specific": {} 00:16:32.334 } 00:16:32.334 ] 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:32.334 07:48:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.334 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 BaseBdev3 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 [ 00:16:32.594 { 00:16:32.594 "name": "BaseBdev3", 00:16:32.594 "aliases": [ 00:16:32.594 
"8899fabc-2add-4985-8739-448f1b0b5bf2" 00:16:32.594 ], 00:16:32.594 "product_name": "Malloc disk", 00:16:32.594 "block_size": 512, 00:16:32.594 "num_blocks": 65536, 00:16:32.594 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:32.594 "assigned_rate_limits": { 00:16:32.594 "rw_ios_per_sec": 0, 00:16:32.594 "rw_mbytes_per_sec": 0, 00:16:32.594 "r_mbytes_per_sec": 0, 00:16:32.594 "w_mbytes_per_sec": 0 00:16:32.594 }, 00:16:32.594 "claimed": false, 00:16:32.594 "zoned": false, 00:16:32.594 "supported_io_types": { 00:16:32.594 "read": true, 00:16:32.594 "write": true, 00:16:32.594 "unmap": true, 00:16:32.594 "flush": true, 00:16:32.594 "reset": true, 00:16:32.594 "nvme_admin": false, 00:16:32.594 "nvme_io": false, 00:16:32.594 "nvme_io_md": false, 00:16:32.594 "write_zeroes": true, 00:16:32.594 "zcopy": true, 00:16:32.594 "get_zone_info": false, 00:16:32.594 "zone_management": false, 00:16:32.594 "zone_append": false, 00:16:32.594 "compare": false, 00:16:32.594 "compare_and_write": false, 00:16:32.594 "abort": true, 00:16:32.594 "seek_hole": false, 00:16:32.594 "seek_data": false, 00:16:32.594 "copy": true, 00:16:32.594 "nvme_iov_md": false 00:16:32.594 }, 00:16:32.594 "memory_domains": [ 00:16:32.594 { 00:16:32.594 "dma_device_id": "system", 00:16:32.594 "dma_device_type": 1 00:16:32.594 }, 00:16:32.594 { 00:16:32.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.594 "dma_device_type": 2 00:16:32.594 } 00:16:32.594 ], 00:16:32.594 "driver_specific": {} 00:16:32.594 } 00:16:32.594 ] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.594 07:48:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 BaseBdev4 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.594 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:32.594 [ 00:16:32.594 { 00:16:32.594 "name": "BaseBdev4", 00:16:32.594 "aliases": [ 00:16:32.594 "6d6ba853-a545-4bcd-b8a4-4942e23b43bd" 00:16:32.594 ], 00:16:32.594 "product_name": "Malloc disk", 00:16:32.594 "block_size": 512, 00:16:32.594 "num_blocks": 65536, 00:16:32.594 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:32.594 "assigned_rate_limits": { 00:16:32.594 "rw_ios_per_sec": 0, 00:16:32.594 "rw_mbytes_per_sec": 0, 00:16:32.594 "r_mbytes_per_sec": 0, 00:16:32.594 "w_mbytes_per_sec": 0 00:16:32.594 }, 00:16:32.594 "claimed": false, 00:16:32.594 "zoned": false, 00:16:32.594 "supported_io_types": { 00:16:32.594 "read": true, 00:16:32.595 "write": true, 00:16:32.595 "unmap": true, 00:16:32.595 "flush": true, 00:16:32.595 "reset": true, 00:16:32.595 "nvme_admin": false, 00:16:32.595 "nvme_io": false, 00:16:32.595 "nvme_io_md": false, 00:16:32.595 "write_zeroes": true, 00:16:32.595 "zcopy": true, 00:16:32.595 "get_zone_info": false, 00:16:32.595 "zone_management": false, 00:16:32.595 "zone_append": false, 00:16:32.595 "compare": false, 00:16:32.595 "compare_and_write": false, 00:16:32.595 "abort": true, 00:16:32.595 "seek_hole": false, 00:16:32.595 "seek_data": false, 00:16:32.595 "copy": true, 00:16:32.595 "nvme_iov_md": false 00:16:32.595 }, 00:16:32.595 "memory_domains": [ 00:16:32.595 { 00:16:32.595 "dma_device_id": "system", 00:16:32.595 "dma_device_type": 1 00:16:32.595 }, 00:16:32.595 { 00:16:32.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.595 "dma_device_type": 2 00:16:32.595 } 00:16:32.595 ], 00:16:32.595 "driver_specific": {} 00:16:32.595 } 00:16:32.595 ] 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.595 07:48:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.595 [2024-11-29 07:48:22.408936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.595 [2024-11-29 07:48:22.409022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.595 [2024-11-29 07:48:22.409047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.595 [2024-11-29 07:48:22.410756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.595 [2024-11-29 07:48:22.410807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.595 "name": "Existed_Raid", 00:16:32.595 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:32.595 "strip_size_kb": 64, 00:16:32.595 "state": "configuring", 00:16:32.595 "raid_level": "raid5f", 00:16:32.595 "superblock": true, 00:16:32.595 "num_base_bdevs": 4, 00:16:32.595 "num_base_bdevs_discovered": 3, 00:16:32.595 "num_base_bdevs_operational": 4, 00:16:32.595 "base_bdevs_list": [ 00:16:32.595 { 00:16:32.595 "name": "BaseBdev1", 00:16:32.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.595 "is_configured": false, 00:16:32.595 "data_offset": 0, 00:16:32.595 "data_size": 0 00:16:32.595 }, 00:16:32.595 { 00:16:32.595 "name": "BaseBdev2", 00:16:32.595 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:32.595 "is_configured": true, 00:16:32.595 "data_offset": 2048, 00:16:32.595 
"data_size": 63488 00:16:32.595 }, 00:16:32.595 { 00:16:32.595 "name": "BaseBdev3", 00:16:32.595 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:32.595 "is_configured": true, 00:16:32.595 "data_offset": 2048, 00:16:32.595 "data_size": 63488 00:16:32.595 }, 00:16:32.595 { 00:16:32.595 "name": "BaseBdev4", 00:16:32.595 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:32.595 "is_configured": true, 00:16:32.595 "data_offset": 2048, 00:16:32.595 "data_size": 63488 00:16:32.595 } 00:16:32.595 ] 00:16:32.595 }' 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.595 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.163 [2024-11-29 07:48:22.812251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.163 07:48:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.163 "name": "Existed_Raid", 00:16:33.163 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:33.163 "strip_size_kb": 64, 00:16:33.163 "state": "configuring", 00:16:33.163 "raid_level": "raid5f", 00:16:33.163 "superblock": true, 00:16:33.163 "num_base_bdevs": 4, 00:16:33.163 "num_base_bdevs_discovered": 2, 00:16:33.163 "num_base_bdevs_operational": 4, 00:16:33.163 "base_bdevs_list": [ 00:16:33.163 { 00:16:33.163 "name": "BaseBdev1", 00:16:33.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.163 "is_configured": false, 00:16:33.163 "data_offset": 0, 00:16:33.163 "data_size": 0 00:16:33.163 }, 00:16:33.163 { 00:16:33.163 "name": null, 00:16:33.163 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:33.163 
"is_configured": false, 00:16:33.163 "data_offset": 0, 00:16:33.163 "data_size": 63488 00:16:33.163 }, 00:16:33.163 { 00:16:33.163 "name": "BaseBdev3", 00:16:33.163 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:33.163 "is_configured": true, 00:16:33.163 "data_offset": 2048, 00:16:33.163 "data_size": 63488 00:16:33.163 }, 00:16:33.163 { 00:16:33.163 "name": "BaseBdev4", 00:16:33.163 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:33.163 "is_configured": true, 00:16:33.163 "data_offset": 2048, 00:16:33.163 "data_size": 63488 00:16:33.163 } 00:16:33.163 ] 00:16:33.163 }' 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.163 07:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.422 [2024-11-29 07:48:23.326921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:33.422 BaseBdev1 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.422 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.423 [ 00:16:33.423 { 00:16:33.423 "name": "BaseBdev1", 00:16:33.423 "aliases": [ 00:16:33.423 "18325ade-f071-4b8f-8139-04e05f1f6cc3" 00:16:33.423 ], 00:16:33.423 "product_name": "Malloc disk", 00:16:33.423 "block_size": 512, 00:16:33.423 "num_blocks": 65536, 00:16:33.423 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 
00:16:33.423 "assigned_rate_limits": { 00:16:33.423 "rw_ios_per_sec": 0, 00:16:33.423 "rw_mbytes_per_sec": 0, 00:16:33.423 "r_mbytes_per_sec": 0, 00:16:33.423 "w_mbytes_per_sec": 0 00:16:33.423 }, 00:16:33.423 "claimed": true, 00:16:33.423 "claim_type": "exclusive_write", 00:16:33.423 "zoned": false, 00:16:33.423 "supported_io_types": { 00:16:33.423 "read": true, 00:16:33.423 "write": true, 00:16:33.423 "unmap": true, 00:16:33.423 "flush": true, 00:16:33.423 "reset": true, 00:16:33.423 "nvme_admin": false, 00:16:33.423 "nvme_io": false, 00:16:33.423 "nvme_io_md": false, 00:16:33.423 "write_zeroes": true, 00:16:33.423 "zcopy": true, 00:16:33.423 "get_zone_info": false, 00:16:33.423 "zone_management": false, 00:16:33.423 "zone_append": false, 00:16:33.423 "compare": false, 00:16:33.423 "compare_and_write": false, 00:16:33.423 "abort": true, 00:16:33.423 "seek_hole": false, 00:16:33.423 "seek_data": false, 00:16:33.423 "copy": true, 00:16:33.423 "nvme_iov_md": false 00:16:33.423 }, 00:16:33.423 "memory_domains": [ 00:16:33.423 { 00:16:33.423 "dma_device_id": "system", 00:16:33.423 "dma_device_type": 1 00:16:33.423 }, 00:16:33.423 { 00:16:33.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.423 "dma_device_type": 2 00:16:33.423 } 00:16:33.423 ], 00:16:33.423 "driver_specific": {} 00:16:33.423 } 00:16:33.423 ] 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.423 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.683 07:48:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.683 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.683 "name": "Existed_Raid", 00:16:33.683 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:33.683 "strip_size_kb": 64, 00:16:33.683 "state": "configuring", 00:16:33.683 "raid_level": "raid5f", 00:16:33.683 "superblock": true, 00:16:33.683 "num_base_bdevs": 4, 00:16:33.683 "num_base_bdevs_discovered": 3, 00:16:33.683 "num_base_bdevs_operational": 4, 00:16:33.683 "base_bdevs_list": [ 00:16:33.683 { 00:16:33.684 "name": "BaseBdev1", 00:16:33.684 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 
00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 2048, 00:16:33.684 "data_size": 63488 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": null, 00:16:33.684 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:33.684 "is_configured": false, 00:16:33.684 "data_offset": 0, 00:16:33.684 "data_size": 63488 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": "BaseBdev3", 00:16:33.684 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 2048, 00:16:33.684 "data_size": 63488 00:16:33.684 }, 00:16:33.684 { 00:16:33.684 "name": "BaseBdev4", 00:16:33.684 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:33.684 "is_configured": true, 00:16:33.684 "data_offset": 2048, 00:16:33.684 "data_size": 63488 00:16:33.684 } 00:16:33.684 ] 00:16:33.684 }' 00:16:33.684 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.684 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.945 [2024-11-29 07:48:23.838152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.945 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.205 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.205 "name": "Existed_Raid", 00:16:34.205 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:34.205 "strip_size_kb": 64, 00:16:34.205 "state": "configuring", 00:16:34.205 "raid_level": "raid5f", 00:16:34.205 "superblock": true, 00:16:34.205 "num_base_bdevs": 4, 00:16:34.205 "num_base_bdevs_discovered": 2, 00:16:34.205 "num_base_bdevs_operational": 4, 00:16:34.205 "base_bdevs_list": [ 00:16:34.205 { 00:16:34.205 "name": "BaseBdev1", 00:16:34.205 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:34.205 "is_configured": true, 00:16:34.205 "data_offset": 2048, 00:16:34.205 "data_size": 63488 00:16:34.205 }, 00:16:34.205 { 00:16:34.205 "name": null, 00:16:34.205 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:34.205 "is_configured": false, 00:16:34.205 "data_offset": 0, 00:16:34.205 "data_size": 63488 00:16:34.205 }, 00:16:34.205 { 00:16:34.205 "name": null, 00:16:34.205 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:34.205 "is_configured": false, 00:16:34.205 "data_offset": 0, 00:16:34.205 "data_size": 63488 00:16:34.205 }, 00:16:34.205 { 00:16:34.205 "name": "BaseBdev4", 00:16:34.205 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:34.205 "is_configured": true, 00:16:34.205 "data_offset": 2048, 00:16:34.205 "data_size": 63488 00:16:34.205 } 00:16:34.205 ] 00:16:34.205 }' 00:16:34.206 07:48:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.206 07:48:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.465 [2024-11-29 07:48:24.333286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.465 "name": "Existed_Raid", 00:16:34.465 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:34.465 "strip_size_kb": 64, 00:16:34.465 "state": "configuring", 00:16:34.465 "raid_level": "raid5f", 00:16:34.465 "superblock": true, 00:16:34.465 "num_base_bdevs": 4, 00:16:34.465 "num_base_bdevs_discovered": 3, 00:16:34.465 "num_base_bdevs_operational": 4, 00:16:34.465 "base_bdevs_list": [ 00:16:34.465 { 00:16:34.465 "name": "BaseBdev1", 00:16:34.465 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:34.465 "is_configured": true, 00:16:34.465 "data_offset": 2048, 00:16:34.465 "data_size": 63488 00:16:34.465 }, 00:16:34.465 { 00:16:34.465 "name": null, 00:16:34.465 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:34.465 "is_configured": false, 00:16:34.465 "data_offset": 0, 00:16:34.465 "data_size": 63488 00:16:34.465 }, 00:16:34.465 { 00:16:34.465 "name": "BaseBdev3", 00:16:34.465 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 
00:16:34.465 "is_configured": true, 00:16:34.465 "data_offset": 2048, 00:16:34.465 "data_size": 63488 00:16:34.465 }, 00:16:34.465 { 00:16:34.465 "name": "BaseBdev4", 00:16:34.465 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:34.465 "is_configured": true, 00:16:34.465 "data_offset": 2048, 00:16:34.465 "data_size": 63488 00:16:34.465 } 00:16:34.465 ] 00:16:34.465 }' 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.465 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.034 [2024-11-29 07:48:24.808477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.034 "name": "Existed_Raid", 00:16:35.034 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:35.034 "strip_size_kb": 64, 00:16:35.034 "state": "configuring", 00:16:35.034 "raid_level": "raid5f", 
00:16:35.034 "superblock": true, 00:16:35.034 "num_base_bdevs": 4, 00:16:35.034 "num_base_bdevs_discovered": 2, 00:16:35.034 "num_base_bdevs_operational": 4, 00:16:35.034 "base_bdevs_list": [ 00:16:35.034 { 00:16:35.034 "name": null, 00:16:35.034 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:35.034 "is_configured": false, 00:16:35.034 "data_offset": 0, 00:16:35.034 "data_size": 63488 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "name": null, 00:16:35.034 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:35.034 "is_configured": false, 00:16:35.034 "data_offset": 0, 00:16:35.034 "data_size": 63488 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "name": "BaseBdev3", 00:16:35.034 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:35.034 "is_configured": true, 00:16:35.034 "data_offset": 2048, 00:16:35.034 "data_size": 63488 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "name": "BaseBdev4", 00:16:35.034 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:35.034 "is_configured": true, 00:16:35.034 "data_offset": 2048, 00:16:35.034 "data_size": 63488 00:16:35.034 } 00:16:35.034 ] 00:16:35.034 }' 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.034 07:48:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.617 [2024-11-29 07:48:25.391374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.617 "name": "Existed_Raid", 00:16:35.617 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:35.617 "strip_size_kb": 64, 00:16:35.617 "state": "configuring", 00:16:35.617 "raid_level": "raid5f", 00:16:35.617 "superblock": true, 00:16:35.617 "num_base_bdevs": 4, 00:16:35.617 "num_base_bdevs_discovered": 3, 00:16:35.617 "num_base_bdevs_operational": 4, 00:16:35.617 "base_bdevs_list": [ 00:16:35.617 { 00:16:35.617 "name": null, 00:16:35.617 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:35.617 "is_configured": false, 00:16:35.617 "data_offset": 0, 00:16:35.617 "data_size": 63488 00:16:35.617 }, 00:16:35.617 { 00:16:35.617 "name": "BaseBdev2", 00:16:35.617 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:35.617 "is_configured": true, 00:16:35.617 "data_offset": 2048, 00:16:35.617 "data_size": 63488 00:16:35.617 }, 00:16:35.617 { 00:16:35.617 "name": "BaseBdev3", 00:16:35.617 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:35.617 "is_configured": true, 00:16:35.617 "data_offset": 2048, 00:16:35.617 "data_size": 63488 00:16:35.617 }, 00:16:35.617 { 00:16:35.617 "name": "BaseBdev4", 00:16:35.617 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:35.617 "is_configured": true, 00:16:35.617 "data_offset": 2048, 00:16:35.617 "data_size": 63488 00:16:35.617 } 00:16:35.617 ] 00:16:35.617 }' 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:35.617 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18325ade-f071-4b8f-8139-04e05f1f6cc3 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 [2024-11-29 07:48:25.953781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:36.187 [2024-11-29 07:48:25.954085] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:36.187 [2024-11-29 07:48:25.954156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:36.187 [2024-11-29 07:48:25.954443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:36.187 NewBaseBdev 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 [2024-11-29 07:48:25.961677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:36.187 [2024-11-29 07:48:25.961736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:36.187 [2024-11-29 07:48:25.961902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 [ 00:16:36.187 { 00:16:36.187 "name": "NewBaseBdev", 00:16:36.187 "aliases": [ 00:16:36.187 "18325ade-f071-4b8f-8139-04e05f1f6cc3" 00:16:36.187 ], 00:16:36.187 "product_name": "Malloc disk", 00:16:36.187 "block_size": 512, 00:16:36.187 "num_blocks": 65536, 00:16:36.187 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:36.187 "assigned_rate_limits": { 00:16:36.187 "rw_ios_per_sec": 0, 00:16:36.187 "rw_mbytes_per_sec": 0, 00:16:36.187 "r_mbytes_per_sec": 0, 00:16:36.187 "w_mbytes_per_sec": 0 00:16:36.187 }, 00:16:36.187 "claimed": true, 00:16:36.187 "claim_type": "exclusive_write", 00:16:36.187 "zoned": false, 00:16:36.187 "supported_io_types": { 00:16:36.187 "read": true, 00:16:36.187 "write": true, 00:16:36.187 "unmap": true, 00:16:36.187 "flush": true, 00:16:36.187 "reset": true, 00:16:36.187 "nvme_admin": false, 00:16:36.187 "nvme_io": false, 00:16:36.187 "nvme_io_md": false, 00:16:36.187 "write_zeroes": true, 00:16:36.187 "zcopy": true, 00:16:36.187 "get_zone_info": false, 00:16:36.187 "zone_management": false, 00:16:36.187 "zone_append": false, 00:16:36.187 "compare": false, 00:16:36.187 "compare_and_write": false, 00:16:36.187 "abort": true, 00:16:36.187 "seek_hole": false, 00:16:36.187 "seek_data": false, 00:16:36.187 "copy": true, 00:16:36.187 "nvme_iov_md": false 00:16:36.187 }, 00:16:36.187 "memory_domains": [ 00:16:36.187 { 00:16:36.187 "dma_device_id": "system", 00:16:36.187 "dma_device_type": 1 00:16:36.187 }, 00:16:36.187 { 00:16:36.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.187 "dma_device_type": 2 00:16:36.187 } 
00:16:36.187 ], 00:16:36.187 "driver_specific": {} 00:16:36.187 } 00:16:36.187 ] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.187 07:48:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 
07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.187 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.187 "name": "Existed_Raid", 00:16:36.187 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:36.187 "strip_size_kb": 64, 00:16:36.187 "state": "online", 00:16:36.187 "raid_level": "raid5f", 00:16:36.187 "superblock": true, 00:16:36.187 "num_base_bdevs": 4, 00:16:36.188 "num_base_bdevs_discovered": 4, 00:16:36.188 "num_base_bdevs_operational": 4, 00:16:36.188 "base_bdevs_list": [ 00:16:36.188 { 00:16:36.188 "name": "NewBaseBdev", 00:16:36.188 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:36.188 "is_configured": true, 00:16:36.188 "data_offset": 2048, 00:16:36.188 "data_size": 63488 00:16:36.188 }, 00:16:36.188 { 00:16:36.188 "name": "BaseBdev2", 00:16:36.188 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:36.188 "is_configured": true, 00:16:36.188 "data_offset": 2048, 00:16:36.188 "data_size": 63488 00:16:36.188 }, 00:16:36.188 { 00:16:36.188 "name": "BaseBdev3", 00:16:36.188 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:36.188 "is_configured": true, 00:16:36.188 "data_offset": 2048, 00:16:36.188 "data_size": 63488 00:16:36.188 }, 00:16:36.188 { 00:16:36.188 "name": "BaseBdev4", 00:16:36.188 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:36.188 "is_configured": true, 00:16:36.188 "data_offset": 2048, 00:16:36.188 "data_size": 63488 00:16:36.188 } 00:16:36.188 ] 00:16:36.188 }' 00:16:36.188 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.188 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.757 [2024-11-29 07:48:26.469193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.757 "name": "Existed_Raid", 00:16:36.757 "aliases": [ 00:16:36.757 "49b89988-5d7a-424e-a05a-d86bd769f23a" 00:16:36.757 ], 00:16:36.757 "product_name": "Raid Volume", 00:16:36.757 "block_size": 512, 00:16:36.757 "num_blocks": 190464, 00:16:36.757 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:36.757 "assigned_rate_limits": { 00:16:36.757 "rw_ios_per_sec": 0, 00:16:36.757 "rw_mbytes_per_sec": 0, 00:16:36.757 "r_mbytes_per_sec": 0, 00:16:36.757 "w_mbytes_per_sec": 0 00:16:36.757 }, 00:16:36.757 "claimed": false, 00:16:36.757 "zoned": false, 00:16:36.757 "supported_io_types": { 00:16:36.757 "read": true, 00:16:36.757 "write": true, 00:16:36.757 "unmap": false, 00:16:36.757 "flush": false, 
00:16:36.757 "reset": true, 00:16:36.757 "nvme_admin": false, 00:16:36.757 "nvme_io": false, 00:16:36.757 "nvme_io_md": false, 00:16:36.757 "write_zeroes": true, 00:16:36.757 "zcopy": false, 00:16:36.757 "get_zone_info": false, 00:16:36.757 "zone_management": false, 00:16:36.757 "zone_append": false, 00:16:36.757 "compare": false, 00:16:36.757 "compare_and_write": false, 00:16:36.757 "abort": false, 00:16:36.757 "seek_hole": false, 00:16:36.757 "seek_data": false, 00:16:36.757 "copy": false, 00:16:36.757 "nvme_iov_md": false 00:16:36.757 }, 00:16:36.757 "driver_specific": { 00:16:36.757 "raid": { 00:16:36.757 "uuid": "49b89988-5d7a-424e-a05a-d86bd769f23a", 00:16:36.757 "strip_size_kb": 64, 00:16:36.757 "state": "online", 00:16:36.757 "raid_level": "raid5f", 00:16:36.757 "superblock": true, 00:16:36.757 "num_base_bdevs": 4, 00:16:36.757 "num_base_bdevs_discovered": 4, 00:16:36.757 "num_base_bdevs_operational": 4, 00:16:36.757 "base_bdevs_list": [ 00:16:36.757 { 00:16:36.757 "name": "NewBaseBdev", 00:16:36.757 "uuid": "18325ade-f071-4b8f-8139-04e05f1f6cc3", 00:16:36.757 "is_configured": true, 00:16:36.757 "data_offset": 2048, 00:16:36.757 "data_size": 63488 00:16:36.757 }, 00:16:36.757 { 00:16:36.757 "name": "BaseBdev2", 00:16:36.757 "uuid": "86c9c36d-09cd-4345-8a27-9fd73a2e6750", 00:16:36.757 "is_configured": true, 00:16:36.757 "data_offset": 2048, 00:16:36.757 "data_size": 63488 00:16:36.757 }, 00:16:36.757 { 00:16:36.757 "name": "BaseBdev3", 00:16:36.757 "uuid": "8899fabc-2add-4985-8739-448f1b0b5bf2", 00:16:36.757 "is_configured": true, 00:16:36.757 "data_offset": 2048, 00:16:36.757 "data_size": 63488 00:16:36.757 }, 00:16:36.757 { 00:16:36.757 "name": "BaseBdev4", 00:16:36.757 "uuid": "6d6ba853-a545-4bcd-b8a4-4942e23b43bd", 00:16:36.757 "is_configured": true, 00:16:36.757 "data_offset": 2048, 00:16:36.757 "data_size": 63488 00:16:36.757 } 00:16:36.757 ] 00:16:36.757 } 00:16:36.757 } 00:16:36.757 }' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:36.757 BaseBdev2 00:16:36.757 BaseBdev3 00:16:36.757 BaseBdev4' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.757 
07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.757 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.758 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.758 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:36.758 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.758 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.758 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.018 [2024-11-29 07:48:26.764446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.018 [2024-11-29 07:48:26.764472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.018 [2024-11-29 07:48:26.764537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.018 [2024-11-29 07:48:26.764818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.018 [2024-11-29 07:48:26.764828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83120 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83120 ']' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83120 
00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83120 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83120' 00:16:37.018 killing process with pid 83120 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83120 00:16:37.018 [2024-11-29 07:48:26.811327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.018 07:48:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83120 00:16:37.278 [2024-11-29 07:48:27.181567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.659 07:48:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:38.659 00:16:38.659 real 0m11.311s 00:16:38.659 user 0m17.959s 00:16:38.659 sys 0m2.078s 00:16:38.659 07:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.659 ************************************ 00:16:38.659 END TEST raid5f_state_function_test_sb 00:16:38.659 ************************************ 00:16:38.659 07:48:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.659 07:48:28 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:38.659 07:48:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:16:38.659 07:48:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.659 07:48:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.659 ************************************ 00:16:38.659 START TEST raid5f_superblock_test 00:16:38.659 ************************************ 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83785 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83785 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83785 ']' 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.659 07:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.659 [2024-11-29 07:48:28.420526] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:38.659 [2024-11-29 07:48:28.420692] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83785 ] 00:16:38.659 [2024-11-29 07:48:28.593560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.919 [2024-11-29 07:48:28.699413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.179 [2024-11-29 07:48:28.898732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.179 [2024-11-29 07:48:28.898764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.439 malloc1 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.439 [2024-11-29 07:48:29.288062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.439 [2024-11-29 07:48:29.288188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.439 [2024-11-29 07:48:29.288228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:39.439 [2024-11-29 07:48:29.288257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.439 [2024-11-29 07:48:29.290253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.439 [2024-11-29 07:48:29.290321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.439 pt1 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.439 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 malloc2 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 [2024-11-29 07:48:29.349498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.440 [2024-11-29 07:48:29.349548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.440 [2024-11-29 07:48:29.349572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:39.440 [2024-11-29 07:48:29.349580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.440 [2024-11-29 07:48:29.351718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.440 [2024-11-29 07:48:29.351753] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.440 pt2 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.440 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.700 malloc3 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.700 [2024-11-29 07:48:29.434133] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:39.700 [2024-11-29 07:48:29.434195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.700 [2024-11-29 07:48:29.434215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:39.700 [2024-11-29 07:48:29.434223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.700 [2024-11-29 07:48:29.436232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.700 [2024-11-29 07:48:29.436268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:39.700 pt3 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.700 07:48:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.700 malloc4 00:16:39.700 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.701 [2024-11-29 07:48:29.487570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:39.701 [2024-11-29 07:48:29.487636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.701 [2024-11-29 07:48:29.487655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:39.701 [2024-11-29 07:48:29.487663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.701 [2024-11-29 07:48:29.489706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.701 [2024-11-29 07:48:29.489738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:39.701 pt4 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.701 [2024-11-29 07:48:29.499589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.701 [2024-11-29 07:48:29.501344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:39.701 [2024-11-29 07:48:29.501430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:39.701 [2024-11-29 07:48:29.501476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:39.701 [2024-11-29 07:48:29.501657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:39.701 [2024-11-29 07:48:29.501695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:39.701 [2024-11-29 07:48:29.501935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:39.701 [2024-11-29 07:48:29.509168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:39.701 [2024-11-29 07:48:29.509193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:39.701 [2024-11-29 07:48:29.509364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.701 
07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.701 "name": "raid_bdev1", 00:16:39.701 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:39.701 "strip_size_kb": 64, 00:16:39.701 "state": "online", 00:16:39.701 "raid_level": "raid5f", 00:16:39.701 "superblock": true, 00:16:39.701 "num_base_bdevs": 4, 00:16:39.701 "num_base_bdevs_discovered": 4, 00:16:39.701 "num_base_bdevs_operational": 4, 00:16:39.701 "base_bdevs_list": [ 00:16:39.701 { 00:16:39.701 "name": "pt1", 00:16:39.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.701 "is_configured": true, 00:16:39.701 "data_offset": 2048, 00:16:39.701 "data_size": 63488 00:16:39.701 }, 00:16:39.701 { 00:16:39.701 "name": "pt2", 00:16:39.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.701 "is_configured": true, 00:16:39.701 "data_offset": 2048, 00:16:39.701 
"data_size": 63488 00:16:39.701 }, 00:16:39.701 { 00:16:39.701 "name": "pt3", 00:16:39.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.701 "is_configured": true, 00:16:39.701 "data_offset": 2048, 00:16:39.701 "data_size": 63488 00:16:39.701 }, 00:16:39.701 { 00:16:39.701 "name": "pt4", 00:16:39.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.701 "is_configured": true, 00:16:39.701 "data_offset": 2048, 00:16:39.701 "data_size": 63488 00:16:39.701 } 00:16:39.701 ] 00:16:39.701 }' 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.701 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.270 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.271 [2024-11-29 07:48:29.953274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:40.271 "name": "raid_bdev1", 00:16:40.271 "aliases": [ 00:16:40.271 "35895661-151f-4da5-888e-2ada5dcc2c48" 00:16:40.271 ], 00:16:40.271 "product_name": "Raid Volume", 00:16:40.271 "block_size": 512, 00:16:40.271 "num_blocks": 190464, 00:16:40.271 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:40.271 "assigned_rate_limits": { 00:16:40.271 "rw_ios_per_sec": 0, 00:16:40.271 "rw_mbytes_per_sec": 0, 00:16:40.271 "r_mbytes_per_sec": 0, 00:16:40.271 "w_mbytes_per_sec": 0 00:16:40.271 }, 00:16:40.271 "claimed": false, 00:16:40.271 "zoned": false, 00:16:40.271 "supported_io_types": { 00:16:40.271 "read": true, 00:16:40.271 "write": true, 00:16:40.271 "unmap": false, 00:16:40.271 "flush": false, 00:16:40.271 "reset": true, 00:16:40.271 "nvme_admin": false, 00:16:40.271 "nvme_io": false, 00:16:40.271 "nvme_io_md": false, 00:16:40.271 "write_zeroes": true, 00:16:40.271 "zcopy": false, 00:16:40.271 "get_zone_info": false, 00:16:40.271 "zone_management": false, 00:16:40.271 "zone_append": false, 00:16:40.271 "compare": false, 00:16:40.271 "compare_and_write": false, 00:16:40.271 "abort": false, 00:16:40.271 "seek_hole": false, 00:16:40.271 "seek_data": false, 00:16:40.271 "copy": false, 00:16:40.271 "nvme_iov_md": false 00:16:40.271 }, 00:16:40.271 "driver_specific": { 00:16:40.271 "raid": { 00:16:40.271 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:40.271 "strip_size_kb": 64, 00:16:40.271 "state": "online", 00:16:40.271 "raid_level": "raid5f", 00:16:40.271 "superblock": true, 00:16:40.271 "num_base_bdevs": 4, 00:16:40.271 "num_base_bdevs_discovered": 4, 00:16:40.271 "num_base_bdevs_operational": 4, 00:16:40.271 "base_bdevs_list": [ 00:16:40.271 { 00:16:40.271 "name": "pt1", 00:16:40.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.271 "is_configured": true, 00:16:40.271 "data_offset": 2048, 
00:16:40.271 "data_size": 63488 00:16:40.271 }, 00:16:40.271 { 00:16:40.271 "name": "pt2", 00:16:40.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.271 "is_configured": true, 00:16:40.271 "data_offset": 2048, 00:16:40.271 "data_size": 63488 00:16:40.271 }, 00:16:40.271 { 00:16:40.271 "name": "pt3", 00:16:40.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.271 "is_configured": true, 00:16:40.271 "data_offset": 2048, 00:16:40.271 "data_size": 63488 00:16:40.271 }, 00:16:40.271 { 00:16:40.271 "name": "pt4", 00:16:40.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.271 "is_configured": true, 00:16:40.271 "data_offset": 2048, 00:16:40.271 "data_size": 63488 00:16:40.271 } 00:16:40.271 ] 00:16:40.271 } 00:16:40.271 } 00:16:40.271 }' 00:16:40.271 07:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:40.271 pt2 00:16:40.271 pt3 00:16:40.271 pt4' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.271 07:48:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.271 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-11-29 07:48:30.240692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35895661-151f-4da5-888e-2ada5dcc2c48 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
35895661-151f-4da5-888e-2ada5dcc2c48 ']' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-11-29 07:48:30.288456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.532 [2024-11-29 07:48:30.288482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.532 [2024-11-29 07:48:30.288549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.532 [2024-11-29 07:48:30.288627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.532 [2024-11-29 07:48:30.288641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.532 
07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:40.532 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-11-29 07:48:30.436226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:40.532 [2024-11-29 07:48:30.438010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:40.532 [2024-11-29 07:48:30.438076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:40.532 [2024-11-29 07:48:30.438107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:40.532 [2024-11-29 07:48:30.438162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:40.532 [2024-11-29 07:48:30.438200] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:40.533 [2024-11-29 07:48:30.438217] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:40.533 [2024-11-29 07:48:30.438234] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:40.533 [2024-11-29 07:48:30.438245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.533 [2024-11-29 07:48:30.438255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:40.533 request: 00:16:40.533 { 00:16:40.533 "name": "raid_bdev1", 00:16:40.533 "raid_level": "raid5f", 00:16:40.533 "base_bdevs": [ 00:16:40.533 "malloc1", 00:16:40.533 "malloc2", 00:16:40.533 "malloc3", 00:16:40.533 "malloc4" 00:16:40.533 ], 00:16:40.533 "strip_size_kb": 64, 00:16:40.533 "superblock": false, 00:16:40.533 "method": "bdev_raid_create", 00:16:40.533 "req_id": 1 00:16:40.533 } 00:16:40.533 Got JSON-RPC error response 
00:16:40.533 response: 00:16:40.533 { 00:16:40.533 "code": -17, 00:16:40.533 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:40.533 } 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:40.533 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.793 [2024-11-29 07:48:30.500094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:40.793 [2024-11-29 07:48:30.500164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:40.793 [2024-11-29 07:48:30.500179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.793 [2024-11-29 07:48:30.500190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.793 [2024-11-29 07:48:30.502340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.793 [2024-11-29 07:48:30.502378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:40.793 [2024-11-29 07:48:30.502440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:40.793 [2024-11-29 07:48:30.502496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.793 pt1 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.793 "name": "raid_bdev1", 00:16:40.793 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:40.793 "strip_size_kb": 64, 00:16:40.793 "state": "configuring", 00:16:40.793 "raid_level": "raid5f", 00:16:40.793 "superblock": true, 00:16:40.793 "num_base_bdevs": 4, 00:16:40.793 "num_base_bdevs_discovered": 1, 00:16:40.793 "num_base_bdevs_operational": 4, 00:16:40.793 "base_bdevs_list": [ 00:16:40.793 { 00:16:40.793 "name": "pt1", 00:16:40.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.793 "is_configured": true, 00:16:40.793 "data_offset": 2048, 00:16:40.793 "data_size": 63488 00:16:40.793 }, 00:16:40.793 { 00:16:40.793 "name": null, 00:16:40.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.793 "is_configured": false, 00:16:40.793 "data_offset": 2048, 00:16:40.793 "data_size": 63488 00:16:40.793 }, 00:16:40.793 { 00:16:40.793 "name": null, 00:16:40.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.793 "is_configured": false, 00:16:40.793 "data_offset": 2048, 00:16:40.793 "data_size": 63488 00:16:40.793 }, 00:16:40.793 { 00:16:40.793 "name": null, 00:16:40.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.793 "is_configured": false, 00:16:40.793 "data_offset": 2048, 00:16:40.793 "data_size": 63488 00:16:40.793 } 00:16:40.793 ] 00:16:40.793 }' 
00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.793 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.053 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.054 [2024-11-29 07:48:30.915582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.054 [2024-11-29 07:48:30.915640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.054 [2024-11-29 07:48:30.915661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:41.054 [2024-11-29 07:48:30.915671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.054 [2024-11-29 07:48:30.916050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.054 [2024-11-29 07:48:30.916076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.054 [2024-11-29 07:48:30.916154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.054 [2024-11-29 07:48:30.916181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.054 pt2 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.054 [2024-11-29 07:48:30.927580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.054 "name": "raid_bdev1", 00:16:41.054 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:41.054 "strip_size_kb": 64, 00:16:41.054 "state": "configuring", 00:16:41.054 "raid_level": "raid5f", 00:16:41.054 "superblock": true, 00:16:41.054 "num_base_bdevs": 4, 00:16:41.054 "num_base_bdevs_discovered": 1, 00:16:41.054 "num_base_bdevs_operational": 4, 00:16:41.054 "base_bdevs_list": [ 00:16:41.054 { 00:16:41.054 "name": "pt1", 00:16:41.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.054 "is_configured": true, 00:16:41.054 "data_offset": 2048, 00:16:41.054 "data_size": 63488 00:16:41.054 }, 00:16:41.054 { 00:16:41.054 "name": null, 00:16:41.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.054 "is_configured": false, 00:16:41.054 "data_offset": 0, 00:16:41.054 "data_size": 63488 00:16:41.054 }, 00:16:41.054 { 00:16:41.054 "name": null, 00:16:41.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.054 "is_configured": false, 00:16:41.054 "data_offset": 2048, 00:16:41.054 "data_size": 63488 00:16:41.054 }, 00:16:41.054 { 00:16:41.054 "name": null, 00:16:41.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.054 "is_configured": false, 00:16:41.054 "data_offset": 2048, 00:16:41.054 "data_size": 63488 00:16:41.054 } 00:16:41.054 ] 00:16:41.054 }' 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.054 07:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 [2024-11-29 07:48:31.378786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.624 [2024-11-29 07:48:31.378864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.624 [2024-11-29 07:48:31.378882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:41.624 [2024-11-29 07:48:31.378890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.624 [2024-11-29 07:48:31.379315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.624 [2024-11-29 07:48:31.379341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.624 [2024-11-29 07:48:31.379417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.624 [2024-11-29 07:48:31.379435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.624 pt2 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 [2024-11-29 07:48:31.390752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:41.624 [2024-11-29 07:48:31.390814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.624 [2024-11-29 07:48:31.390835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:41.624 [2024-11-29 07:48:31.390845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.624 [2024-11-29 07:48:31.391207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.624 [2024-11-29 07:48:31.391230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.624 [2024-11-29 07:48:31.391288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:41.624 [2024-11-29 07:48:31.391311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.624 pt3 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 [2024-11-29 07:48:31.402707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:41.624 [2024-11-29 07:48:31.402747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.624 [2024-11-29 07:48:31.402777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:41.624 [2024-11-29 07:48:31.402784] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.624 [2024-11-29 07:48:31.403142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.624 [2024-11-29 07:48:31.403165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:41.624 [2024-11-29 07:48:31.403226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:41.624 [2024-11-29 07:48:31.403245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:41.624 [2024-11-29 07:48:31.403382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:41.624 [2024-11-29 07:48:31.403394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.624 [2024-11-29 07:48:31.403608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:41.624 [2024-11-29 07:48:31.410272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:41.624 [2024-11-29 07:48:31.410309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:41.624 [2024-11-29 07:48:31.410517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.624 pt4 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.624 "name": "raid_bdev1", 00:16:41.624 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:41.624 "strip_size_kb": 64, 00:16:41.624 "state": "online", 00:16:41.624 "raid_level": "raid5f", 00:16:41.624 "superblock": true, 00:16:41.624 "num_base_bdevs": 4, 00:16:41.624 "num_base_bdevs_discovered": 4, 00:16:41.624 "num_base_bdevs_operational": 4, 00:16:41.624 "base_bdevs_list": [ 00:16:41.624 { 00:16:41.624 "name": "pt1", 00:16:41.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.624 "is_configured": true, 00:16:41.624 
"data_offset": 2048, 00:16:41.624 "data_size": 63488 00:16:41.624 }, 00:16:41.624 { 00:16:41.624 "name": "pt2", 00:16:41.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.624 "is_configured": true, 00:16:41.624 "data_offset": 2048, 00:16:41.624 "data_size": 63488 00:16:41.624 }, 00:16:41.624 { 00:16:41.624 "name": "pt3", 00:16:41.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.624 "is_configured": true, 00:16:41.624 "data_offset": 2048, 00:16:41.624 "data_size": 63488 00:16:41.624 }, 00:16:41.624 { 00:16:41.624 "name": "pt4", 00:16:41.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.624 "is_configured": true, 00:16:41.624 "data_offset": 2048, 00:16:41.624 "data_size": 63488 00:16:41.624 } 00:16:41.624 ] 00:16:41.624 }' 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.624 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.884 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.144 07:48:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.144 [2024-11-29 07:48:31.834412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.144 "name": "raid_bdev1", 00:16:42.144 "aliases": [ 00:16:42.144 "35895661-151f-4da5-888e-2ada5dcc2c48" 00:16:42.144 ], 00:16:42.144 "product_name": "Raid Volume", 00:16:42.144 "block_size": 512, 00:16:42.144 "num_blocks": 190464, 00:16:42.144 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:42.144 "assigned_rate_limits": { 00:16:42.144 "rw_ios_per_sec": 0, 00:16:42.144 "rw_mbytes_per_sec": 0, 00:16:42.144 "r_mbytes_per_sec": 0, 00:16:42.144 "w_mbytes_per_sec": 0 00:16:42.144 }, 00:16:42.144 "claimed": false, 00:16:42.144 "zoned": false, 00:16:42.144 "supported_io_types": { 00:16:42.144 "read": true, 00:16:42.144 "write": true, 00:16:42.144 "unmap": false, 00:16:42.144 "flush": false, 00:16:42.144 "reset": true, 00:16:42.144 "nvme_admin": false, 00:16:42.144 "nvme_io": false, 00:16:42.144 "nvme_io_md": false, 00:16:42.144 "write_zeroes": true, 00:16:42.144 "zcopy": false, 00:16:42.144 "get_zone_info": false, 00:16:42.144 "zone_management": false, 00:16:42.144 "zone_append": false, 00:16:42.144 "compare": false, 00:16:42.144 "compare_and_write": false, 00:16:42.144 "abort": false, 00:16:42.144 "seek_hole": false, 00:16:42.144 "seek_data": false, 00:16:42.144 "copy": false, 00:16:42.144 "nvme_iov_md": false 00:16:42.144 }, 00:16:42.144 "driver_specific": { 00:16:42.144 "raid": { 00:16:42.144 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:42.144 "strip_size_kb": 64, 00:16:42.144 "state": "online", 00:16:42.144 "raid_level": "raid5f", 00:16:42.144 "superblock": true, 00:16:42.144 "num_base_bdevs": 4, 00:16:42.144 "num_base_bdevs_discovered": 4, 
00:16:42.144 "num_base_bdevs_operational": 4, 00:16:42.144 "base_bdevs_list": [ 00:16:42.144 { 00:16:42.144 "name": "pt1", 00:16:42.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.144 "is_configured": true, 00:16:42.144 "data_offset": 2048, 00:16:42.144 "data_size": 63488 00:16:42.144 }, 00:16:42.144 { 00:16:42.144 "name": "pt2", 00:16:42.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.144 "is_configured": true, 00:16:42.144 "data_offset": 2048, 00:16:42.144 "data_size": 63488 00:16:42.144 }, 00:16:42.144 { 00:16:42.144 "name": "pt3", 00:16:42.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.144 "is_configured": true, 00:16:42.144 "data_offset": 2048, 00:16:42.144 "data_size": 63488 00:16:42.144 }, 00:16:42.144 { 00:16:42.144 "name": "pt4", 00:16:42.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.144 "is_configured": true, 00:16:42.144 "data_offset": 2048, 00:16:42.144 "data_size": 63488 00:16:42.144 } 00:16:42.144 ] 00:16:42.144 } 00:16:42.144 } 00:16:42.144 }' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:42.144 pt2 00:16:42.144 pt3 00:16:42.144 pt4' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.144 07:48:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.144 07:48:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.144 
07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.144 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.405 [2024-11-29 07:48:32.121887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 35895661-151f-4da5-888e-2ada5dcc2c48 '!=' 35895661-151f-4da5-888e-2ada5dcc2c48 ']' 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.405 [2024-11-29 07:48:32.153720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.405 "name": "raid_bdev1", 00:16:42.405 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:42.405 "strip_size_kb": 64, 00:16:42.405 "state": "online", 00:16:42.405 "raid_level": "raid5f", 00:16:42.405 "superblock": true, 00:16:42.405 "num_base_bdevs": 4, 00:16:42.405 "num_base_bdevs_discovered": 3, 00:16:42.405 "num_base_bdevs_operational": 3, 00:16:42.405 "base_bdevs_list": [ 00:16:42.405 { 00:16:42.405 "name": null, 00:16:42.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.405 "is_configured": false, 00:16:42.405 "data_offset": 0, 00:16:42.405 "data_size": 63488 00:16:42.405 }, 00:16:42.405 { 00:16:42.405 "name": "pt2", 00:16:42.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.405 "is_configured": true, 00:16:42.405 "data_offset": 2048, 00:16:42.405 "data_size": 63488 00:16:42.405 }, 00:16:42.405 { 00:16:42.405 "name": "pt3", 00:16:42.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.405 "is_configured": true, 00:16:42.405 "data_offset": 2048, 00:16:42.405 "data_size": 63488 00:16:42.405 }, 00:16:42.405 { 00:16:42.405 "name": "pt4", 00:16:42.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.405 "is_configured": true, 00:16:42.405 
"data_offset": 2048, 00:16:42.405 "data_size": 63488 00:16:42.405 } 00:16:42.405 ] 00:16:42.405 }' 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.405 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.666 [2024-11-29 07:48:32.573015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.666 [2024-11-29 07:48:32.573046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.666 [2024-11-29 07:48:32.573130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.666 [2024-11-29 07:48:32.573237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.666 [2024-11-29 07:48:32.573250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:42.666 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.925 07:48:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 [2024-11-29 07:48:32.668851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.926 [2024-11-29 07:48:32.668899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.926 [2024-11-29 07:48:32.668916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:42.926 [2024-11-29 07:48:32.668924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.926 [2024-11-29 07:48:32.671040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.926 [2024-11-29 07:48:32.671077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.926 [2024-11-29 07:48:32.671176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:42.926 [2024-11-29 07:48:32.671218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.926 pt2 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.926 "name": "raid_bdev1", 00:16:42.926 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:42.926 "strip_size_kb": 64, 00:16:42.926 "state": "configuring", 00:16:42.926 "raid_level": "raid5f", 00:16:42.926 "superblock": true, 00:16:42.926 
"num_base_bdevs": 4, 00:16:42.926 "num_base_bdevs_discovered": 1, 00:16:42.926 "num_base_bdevs_operational": 3, 00:16:42.926 "base_bdevs_list": [ 00:16:42.926 { 00:16:42.926 "name": null, 00:16:42.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.926 "is_configured": false, 00:16:42.926 "data_offset": 2048, 00:16:42.926 "data_size": 63488 00:16:42.926 }, 00:16:42.926 { 00:16:42.926 "name": "pt2", 00:16:42.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.926 "is_configured": true, 00:16:42.926 "data_offset": 2048, 00:16:42.926 "data_size": 63488 00:16:42.926 }, 00:16:42.926 { 00:16:42.926 "name": null, 00:16:42.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.926 "is_configured": false, 00:16:42.926 "data_offset": 2048, 00:16:42.926 "data_size": 63488 00:16:42.926 }, 00:16:42.926 { 00:16:42.926 "name": null, 00:16:42.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.926 "is_configured": false, 00:16:42.926 "data_offset": 2048, 00:16:42.926 "data_size": 63488 00:16:42.926 } 00:16:42.926 ] 00:16:42.926 }' 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.926 07:48:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.186 [2024-11-29 07:48:33.048201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.186 [2024-11-29 
07:48:33.048265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.186 [2024-11-29 07:48:33.048286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:43.186 [2024-11-29 07:48:33.048294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.186 [2024-11-29 07:48:33.048693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.186 [2024-11-29 07:48:33.048719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.186 [2024-11-29 07:48:33.048792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.186 [2024-11-29 07:48:33.048813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.186 pt3 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.186 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.187 "name": "raid_bdev1", 00:16:43.187 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:43.187 "strip_size_kb": 64, 00:16:43.187 "state": "configuring", 00:16:43.187 "raid_level": "raid5f", 00:16:43.187 "superblock": true, 00:16:43.187 "num_base_bdevs": 4, 00:16:43.187 "num_base_bdevs_discovered": 2, 00:16:43.187 "num_base_bdevs_operational": 3, 00:16:43.187 "base_bdevs_list": [ 00:16:43.187 { 00:16:43.187 "name": null, 00:16:43.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.187 "is_configured": false, 00:16:43.187 "data_offset": 2048, 00:16:43.187 "data_size": 63488 00:16:43.187 }, 00:16:43.187 { 00:16:43.187 "name": "pt2", 00:16:43.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.187 "is_configured": true, 00:16:43.187 "data_offset": 2048, 00:16:43.187 "data_size": 63488 00:16:43.187 }, 00:16:43.187 { 00:16:43.187 "name": "pt3", 00:16:43.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.187 "is_configured": true, 00:16:43.187 "data_offset": 2048, 00:16:43.187 "data_size": 63488 00:16:43.187 }, 00:16:43.187 { 00:16:43.187 "name": null, 00:16:43.187 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.187 "is_configured": false, 00:16:43.187 "data_offset": 2048, 
00:16:43.187 "data_size": 63488 00:16:43.187 } 00:16:43.187 ] 00:16:43.187 }' 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.187 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.756 [2024-11-29 07:48:33.455669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:43.756 [2024-11-29 07:48:33.455727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.756 [2024-11-29 07:48:33.455747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:43.756 [2024-11-29 07:48:33.455756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.756 [2024-11-29 07:48:33.456219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.756 [2024-11-29 07:48:33.456251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:43.756 [2024-11-29 07:48:33.456331] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:43.756 [2024-11-29 07:48:33.456358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:43.756 [2024-11-29 07:48:33.456490] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:43.756 [2024-11-29 07:48:33.456510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:43.756 [2024-11-29 07:48:33.456756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:43.756 [2024-11-29 07:48:33.463854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:43.756 [2024-11-29 07:48:33.463888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:43.756 [2024-11-29 07:48:33.464204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.756 pt4 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.756 
07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.756 "name": "raid_bdev1", 00:16:43.756 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:43.756 "strip_size_kb": 64, 00:16:43.756 "state": "online", 00:16:43.756 "raid_level": "raid5f", 00:16:43.756 "superblock": true, 00:16:43.756 "num_base_bdevs": 4, 00:16:43.756 "num_base_bdevs_discovered": 3, 00:16:43.756 "num_base_bdevs_operational": 3, 00:16:43.756 "base_bdevs_list": [ 00:16:43.756 { 00:16:43.756 "name": null, 00:16:43.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.756 "is_configured": false, 00:16:43.756 "data_offset": 2048, 00:16:43.756 "data_size": 63488 00:16:43.756 }, 00:16:43.756 { 00:16:43.756 "name": "pt2", 00:16:43.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.756 "is_configured": true, 00:16:43.756 "data_offset": 2048, 00:16:43.756 "data_size": 63488 00:16:43.756 }, 00:16:43.756 { 00:16:43.756 "name": "pt3", 00:16:43.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.756 "is_configured": true, 00:16:43.756 "data_offset": 2048, 00:16:43.756 "data_size": 63488 00:16:43.756 }, 00:16:43.756 { 00:16:43.756 "name": "pt4", 00:16:43.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.756 "is_configured": true, 00:16:43.756 "data_offset": 2048, 00:16:43.756 "data_size": 63488 00:16:43.756 } 00:16:43.756 ] 00:16:43.756 }' 00:16:43.756 07:48:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.756 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.016 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.016 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.016 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.017 [2024-11-29 07:48:33.892689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.017 [2024-11-29 07:48:33.892716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.017 [2024-11-29 07:48:33.892778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.017 [2024-11-29 07:48:33.892845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.017 [2024-11-29 07:48:33.892857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.017 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.277 [2024-11-29 07:48:33.964570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:44.277 [2024-11-29 07:48:33.964640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.277 [2024-11-29 07:48:33.964667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:44.277 [2024-11-29 07:48:33.964682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.277 [2024-11-29 07:48:33.966940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.277 [2024-11-29 07:48:33.966980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:44.277 [2024-11-29 07:48:33.967055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:44.277 [2024-11-29 07:48:33.967113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.277 
[2024-11-29 07:48:33.967237] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:44.277 [2024-11-29 07:48:33.967250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.277 [2024-11-29 07:48:33.967264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:44.277 [2024-11-29 07:48:33.967334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.277 [2024-11-29 07:48:33.967435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.277 pt1 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.277 07:48:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.277 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.277 "name": "raid_bdev1", 00:16:44.277 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:44.277 "strip_size_kb": 64, 00:16:44.277 "state": "configuring", 00:16:44.277 "raid_level": "raid5f", 00:16:44.277 "superblock": true, 00:16:44.277 "num_base_bdevs": 4, 00:16:44.277 "num_base_bdevs_discovered": 2, 00:16:44.277 "num_base_bdevs_operational": 3, 00:16:44.277 "base_bdevs_list": [ 00:16:44.277 { 00:16:44.277 "name": null, 00:16:44.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.277 "is_configured": false, 00:16:44.277 "data_offset": 2048, 00:16:44.277 "data_size": 63488 00:16:44.277 }, 00:16:44.277 { 00:16:44.277 "name": "pt2", 00:16:44.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.277 "is_configured": true, 00:16:44.277 "data_offset": 2048, 00:16:44.277 "data_size": 63488 00:16:44.277 }, 00:16:44.277 { 00:16:44.277 "name": "pt3", 00:16:44.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.277 "is_configured": true, 00:16:44.277 "data_offset": 2048, 00:16:44.277 "data_size": 63488 00:16:44.277 }, 00:16:44.277 { 00:16:44.277 "name": null, 00:16:44.277 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.277 "is_configured": false, 00:16:44.277 "data_offset": 2048, 00:16:44.277 "data_size": 63488 00:16:44.277 } 00:16:44.277 ] 
00:16:44.277 }' 00:16:44.277 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.277 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.537 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.538 [2024-11-29 07:48:34.439910] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:44.538 [2024-11-29 07:48:34.439981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.538 [2024-11-29 07:48:34.440003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:44.538 [2024-11-29 07:48:34.440013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.538 [2024-11-29 07:48:34.440501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.538 [2024-11-29 07:48:34.440534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:44.538 [2024-11-29 07:48:34.440620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:44.538 [2024-11-29 07:48:34.440643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:44.538 [2024-11-29 07:48:34.440804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:44.538 [2024-11-29 07:48:34.440822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.538 [2024-11-29 07:48:34.441116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:44.538 [2024-11-29 07:48:34.448925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:44.538 [2024-11-29 07:48:34.448953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:44.538 [2024-11-29 07:48:34.449232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.538 pt4 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.538 07:48:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.538 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.798 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.798 "name": "raid_bdev1", 00:16:44.798 "uuid": "35895661-151f-4da5-888e-2ada5dcc2c48", 00:16:44.798 "strip_size_kb": 64, 00:16:44.798 "state": "online", 00:16:44.798 "raid_level": "raid5f", 00:16:44.798 "superblock": true, 00:16:44.798 "num_base_bdevs": 4, 00:16:44.798 "num_base_bdevs_discovered": 3, 00:16:44.798 "num_base_bdevs_operational": 3, 00:16:44.798 "base_bdevs_list": [ 00:16:44.798 { 00:16:44.798 "name": null, 00:16:44.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.798 "is_configured": false, 00:16:44.798 "data_offset": 2048, 00:16:44.798 "data_size": 63488 00:16:44.798 }, 00:16:44.798 { 00:16:44.798 "name": "pt2", 00:16:44.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.798 "is_configured": true, 00:16:44.798 "data_offset": 2048, 00:16:44.798 "data_size": 63488 00:16:44.798 }, 00:16:44.798 { 00:16:44.798 "name": "pt3", 00:16:44.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.798 "is_configured": true, 00:16:44.798 "data_offset": 2048, 00:16:44.798 "data_size": 63488 
00:16:44.798 }, 00:16:44.798 { 00:16:44.798 "name": "pt4", 00:16:44.798 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.798 "is_configured": true, 00:16:44.798 "data_offset": 2048, 00:16:44.798 "data_size": 63488 00:16:44.798 } 00:16:44.798 ] 00:16:44.798 }' 00:16:44.798 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.798 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:45.057 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.058 [2024-11-29 07:48:34.961799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 35895661-151f-4da5-888e-2ada5dcc2c48 '!=' 35895661-151f-4da5-888e-2ada5dcc2c48 ']' 00:16:45.058 07:48:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83785 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83785 ']' 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83785 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.058 07:48:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83785 00:16:45.316 07:48:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.316 07:48:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.316 killing process with pid 83785 00:16:45.316 07:48:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83785' 00:16:45.316 07:48:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83785 00:16:45.316 [2024-11-29 07:48:35.028449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.316 [2024-11-29 07:48:35.028536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.316 [2024-11-29 07:48:35.028617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.316 [2024-11-29 07:48:35.028633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:45.316 07:48:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83785 00:16:45.573 [2024-11-29 07:48:35.402913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.521 07:48:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:46.521 
00:16:46.521 real 0m8.133s 00:16:46.521 user 0m12.707s 00:16:46.521 sys 0m1.521s 00:16:46.521 07:48:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.521 07:48:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.521 ************************************ 00:16:46.521 END TEST raid5f_superblock_test 00:16:46.521 ************************************ 00:16:46.780 07:48:36 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:46.780 07:48:36 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:46.780 07:48:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:46.780 07:48:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.780 07:48:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.780 ************************************ 00:16:46.780 START TEST raid5f_rebuild_test 00:16:46.780 ************************************ 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:46.780 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:46.781 07:48:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84266 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84266 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84266 ']' 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.781 07:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.781 Zero copy mechanism will not be used. 00:16:46.781 [2024-11-29 07:48:36.628414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:46.781 [2024-11-29 07:48:36.628517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84266 ] 00:16:47.040 [2024-11-29 07:48:36.800215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.041 [2024-11-29 07:48:36.906210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.306 [2024-11-29 07:48:37.097640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.306 [2024-11-29 07:48:37.097700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.579 BaseBdev1_malloc 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.579 [2024-11-29 07:48:37.488740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:47.579 [2024-11-29 07:48:37.488799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.579 [2024-11-29 07:48:37.488822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.579 [2024-11-29 07:48:37.488833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.579 [2024-11-29 07:48:37.490823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.579 [2024-11-29 07:48:37.490881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.579 BaseBdev1 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.579 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 BaseBdev2_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 [2024-11-29 07:48:37.542321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:47.878 [2024-11-29 07:48:37.542378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.878 [2024-11-29 07:48:37.542400] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.878 [2024-11-29 07:48:37.542409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.878 [2024-11-29 07:48:37.544382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.878 [2024-11-29 07:48:37.544418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.878 BaseBdev2 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 BaseBdev3_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 [2024-11-29 07:48:37.634167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:47.878 [2024-11-29 07:48:37.634231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.878 [2024-11-29 07:48:37.634252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.878 [2024-11-29 07:48:37.634263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.878 
[2024-11-29 07:48:37.636256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.878 [2024-11-29 07:48:37.636295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:47.878 BaseBdev3 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 BaseBdev4_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 [2024-11-29 07:48:37.682695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:47.878 [2024-11-29 07:48:37.682764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.878 [2024-11-29 07:48:37.682783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:47.878 [2024-11-29 07:48:37.682793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.878 [2024-11-29 07:48:37.684765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.878 [2024-11-29 07:48:37.684803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:47.878 BaseBdev4 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 spare_malloc 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 spare_delay 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 [2024-11-29 07:48:37.748045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:47.878 [2024-11-29 07:48:37.748095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.878 [2024-11-29 07:48:37.748128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:47.878 [2024-11-29 07:48:37.748138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.878 [2024-11-29 07:48:37.750260] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.878 [2024-11-29 07:48:37.750292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:47.878 spare 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.878 [2024-11-29 07:48:37.760073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.878 [2024-11-29 07:48:37.761889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.878 [2024-11-29 07:48:37.761970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.878 [2024-11-29 07:48:37.762019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.878 [2024-11-29 07:48:37.762102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.878 [2024-11-29 07:48:37.762126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:47.878 [2024-11-29 07:48:37.762364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:47.878 [2024-11-29 07:48:37.769906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.878 [2024-11-29 07:48:37.769929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.878 [2024-11-29 07:48:37.770130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.878 07:48:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.878 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.879 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.879 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.879 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.879 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.879 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.152 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.153 "name": "raid_bdev1", 00:16:48.153 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:48.153 "strip_size_kb": 64, 00:16:48.153 "state": "online", 00:16:48.153 
"raid_level": "raid5f", 00:16:48.153 "superblock": false, 00:16:48.153 "num_base_bdevs": 4, 00:16:48.153 "num_base_bdevs_discovered": 4, 00:16:48.153 "num_base_bdevs_operational": 4, 00:16:48.153 "base_bdevs_list": [ 00:16:48.153 { 00:16:48.153 "name": "BaseBdev1", 00:16:48.153 "uuid": "3518a961-b969-5850-8250-b94e187c3109", 00:16:48.153 "is_configured": true, 00:16:48.153 "data_offset": 0, 00:16:48.153 "data_size": 65536 00:16:48.153 }, 00:16:48.153 { 00:16:48.153 "name": "BaseBdev2", 00:16:48.153 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:48.153 "is_configured": true, 00:16:48.153 "data_offset": 0, 00:16:48.153 "data_size": 65536 00:16:48.153 }, 00:16:48.153 { 00:16:48.153 "name": "BaseBdev3", 00:16:48.153 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:48.153 "is_configured": true, 00:16:48.153 "data_offset": 0, 00:16:48.153 "data_size": 65536 00:16:48.153 }, 00:16:48.153 { 00:16:48.153 "name": "BaseBdev4", 00:16:48.153 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:48.153 "is_configured": true, 00:16:48.153 "data_offset": 0, 00:16:48.153 "data_size": 65536 00:16:48.153 } 00:16:48.153 ] 00:16:48.153 }' 00:16:48.153 07:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.153 07:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.411 [2024-11-29 07:48:38.242071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:48.411 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:48.412 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:48.671 [2024-11-29 07:48:38.489479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:48.672 /dev/nbd0 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.672 1+0 records in 00:16:48.672 1+0 records out 00:16:48.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040674 s, 10.1 MB/s 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:48.672 07:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:49.240 512+0 records in 00:16:49.240 512+0 records out 00:16:49.240 100663296 bytes (101 MB, 96 MiB) copied, 0.468084 s, 215 MB/s 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.240 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.500 
07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.500 [2024-11-29 07:48:39.241763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.500 [2024-11-29 07:48:39.251879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.500 "name": "raid_bdev1", 00:16:49.500 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:49.500 "strip_size_kb": 64, 00:16:49.500 "state": "online", 00:16:49.500 "raid_level": "raid5f", 00:16:49.500 "superblock": false, 00:16:49.500 "num_base_bdevs": 4, 00:16:49.500 "num_base_bdevs_discovered": 3, 00:16:49.500 "num_base_bdevs_operational": 3, 00:16:49.500 "base_bdevs_list": [ 00:16:49.500 { 00:16:49.500 "name": null, 00:16:49.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.500 "is_configured": false, 00:16:49.500 "data_offset": 0, 00:16:49.500 "data_size": 65536 00:16:49.500 }, 00:16:49.500 { 00:16:49.500 "name": "BaseBdev2", 00:16:49.500 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:49.500 "is_configured": true, 00:16:49.500 "data_offset": 0, 00:16:49.500 "data_size": 65536 00:16:49.500 }, 00:16:49.500 { 00:16:49.500 "name": "BaseBdev3", 00:16:49.500 "uuid": 
"6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:49.500 "is_configured": true, 00:16:49.500 "data_offset": 0, 00:16:49.500 "data_size": 65536 00:16:49.500 }, 00:16:49.500 { 00:16:49.500 "name": "BaseBdev4", 00:16:49.500 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:49.500 "is_configured": true, 00:16:49.500 "data_offset": 0, 00:16:49.500 "data_size": 65536 00:16:49.500 } 00:16:49.500 ] 00:16:49.500 }' 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.500 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.760 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.760 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.760 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.760 [2024-11-29 07:48:39.671151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.760 [2024-11-29 07:48:39.685517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:49.760 07:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.760 07:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:49.760 [2024-11-29 07:48:39.694253] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.141 07:48:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.141 "name": "raid_bdev1", 00:16:51.141 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:51.141 "strip_size_kb": 64, 00:16:51.141 "state": "online", 00:16:51.141 "raid_level": "raid5f", 00:16:51.141 "superblock": false, 00:16:51.141 "num_base_bdevs": 4, 00:16:51.141 "num_base_bdevs_discovered": 4, 00:16:51.141 "num_base_bdevs_operational": 4, 00:16:51.141 "process": { 00:16:51.141 "type": "rebuild", 00:16:51.141 "target": "spare", 00:16:51.141 "progress": { 00:16:51.141 "blocks": 19200, 00:16:51.141 "percent": 9 00:16:51.141 } 00:16:51.141 }, 00:16:51.141 "base_bdevs_list": [ 00:16:51.141 { 00:16:51.141 "name": "spare", 00:16:51.141 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 00:16:51.141 { 00:16:51.141 "name": "BaseBdev2", 00:16:51.141 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 00:16:51.141 { 00:16:51.141 "name": "BaseBdev3", 00:16:51.141 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 
00:16:51.141 { 00:16:51.141 "name": "BaseBdev4", 00:16:51.141 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 } 00:16:51.141 ] 00:16:51.141 }' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.141 [2024-11-29 07:48:40.845207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.141 [2024-11-29 07:48:40.900299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.141 [2024-11-29 07:48:40.900374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.141 [2024-11-29 07:48:40.900391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.141 [2024-11-29 07:48:40.900400] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.141 "name": "raid_bdev1", 00:16:51.141 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:51.141 "strip_size_kb": 64, 00:16:51.141 "state": "online", 00:16:51.141 "raid_level": "raid5f", 00:16:51.141 "superblock": false, 00:16:51.141 "num_base_bdevs": 4, 00:16:51.141 "num_base_bdevs_discovered": 3, 00:16:51.141 "num_base_bdevs_operational": 3, 00:16:51.141 "base_bdevs_list": [ 00:16:51.141 { 00:16:51.141 "name": null, 00:16:51.141 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:51.141 "is_configured": false, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 00:16:51.141 { 00:16:51.141 "name": "BaseBdev2", 00:16:51.141 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 00:16:51.141 { 00:16:51.141 "name": "BaseBdev3", 00:16:51.141 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 }, 00:16:51.141 { 00:16:51.141 "name": "BaseBdev4", 00:16:51.141 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:51.141 "is_configured": true, 00:16:51.141 "data_offset": 0, 00:16:51.141 "data_size": 65536 00:16:51.141 } 00:16:51.141 ] 00:16:51.141 }' 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.141 07:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.710 "name": "raid_bdev1", 00:16:51.710 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:51.710 "strip_size_kb": 64, 00:16:51.710 "state": "online", 00:16:51.710 "raid_level": "raid5f", 00:16:51.710 "superblock": false, 00:16:51.710 "num_base_bdevs": 4, 00:16:51.710 "num_base_bdevs_discovered": 3, 00:16:51.710 "num_base_bdevs_operational": 3, 00:16:51.710 "base_bdevs_list": [ 00:16:51.710 { 00:16:51.710 "name": null, 00:16:51.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.710 "is_configured": false, 00:16:51.710 "data_offset": 0, 00:16:51.710 "data_size": 65536 00:16:51.710 }, 00:16:51.710 { 00:16:51.710 "name": "BaseBdev2", 00:16:51.710 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:51.710 "is_configured": true, 00:16:51.710 "data_offset": 0, 00:16:51.710 "data_size": 65536 00:16:51.710 }, 00:16:51.710 { 00:16:51.710 "name": "BaseBdev3", 00:16:51.710 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:51.710 "is_configured": true, 00:16:51.710 "data_offset": 0, 00:16:51.710 "data_size": 65536 00:16:51.710 }, 00:16:51.710 { 00:16:51.710 "name": "BaseBdev4", 00:16:51.710 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:51.710 "is_configured": true, 00:16:51.710 "data_offset": 0, 00:16:51.710 "data_size": 65536 00:16:51.710 } 00:16:51.710 ] 00:16:51.710 }' 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.710 [2024-11-29 07:48:41.536541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.710 [2024-11-29 07:48:41.551276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.710 07:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:51.710 [2024-11-29 07:48:41.560593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.647 07:48:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.906 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.906 "name": "raid_bdev1", 00:16:52.906 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:52.906 "strip_size_kb": 64, 00:16:52.906 "state": "online", 00:16:52.906 "raid_level": "raid5f", 00:16:52.906 "superblock": false, 00:16:52.906 "num_base_bdevs": 4, 00:16:52.906 "num_base_bdevs_discovered": 4, 00:16:52.906 "num_base_bdevs_operational": 4, 00:16:52.906 "process": { 00:16:52.906 "type": "rebuild", 00:16:52.906 "target": "spare", 00:16:52.906 "progress": { 00:16:52.906 "blocks": 19200, 00:16:52.906 "percent": 9 00:16:52.906 } 00:16:52.906 }, 00:16:52.906 "base_bdevs_list": [ 00:16:52.906 { 00:16:52.906 "name": "spare", 00:16:52.906 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:52.906 "is_configured": true, 00:16:52.906 "data_offset": 0, 00:16:52.906 "data_size": 65536 00:16:52.906 }, 00:16:52.906 { 00:16:52.906 "name": "BaseBdev2", 00:16:52.906 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:52.906 "is_configured": true, 00:16:52.906 "data_offset": 0, 00:16:52.906 "data_size": 65536 00:16:52.906 }, 00:16:52.906 { 00:16:52.906 "name": "BaseBdev3", 00:16:52.906 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:52.906 "is_configured": true, 00:16:52.906 "data_offset": 0, 00:16:52.906 "data_size": 65536 00:16:52.906 }, 00:16:52.906 { 00:16:52.906 "name": "BaseBdev4", 00:16:52.906 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:52.906 "is_configured": true, 00:16:52.906 "data_offset": 0, 00:16:52.906 "data_size": 65536 00:16:52.906 } 00:16:52.906 ] 00:16:52.906 }' 00:16:52.906 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.906 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=601 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.907 "name": "raid_bdev1", 00:16:52.907 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:52.907 "strip_size_kb": 64, 
00:16:52.907 "state": "online", 00:16:52.907 "raid_level": "raid5f", 00:16:52.907 "superblock": false, 00:16:52.907 "num_base_bdevs": 4, 00:16:52.907 "num_base_bdevs_discovered": 4, 00:16:52.907 "num_base_bdevs_operational": 4, 00:16:52.907 "process": { 00:16:52.907 "type": "rebuild", 00:16:52.907 "target": "spare", 00:16:52.907 "progress": { 00:16:52.907 "blocks": 21120, 00:16:52.907 "percent": 10 00:16:52.907 } 00:16:52.907 }, 00:16:52.907 "base_bdevs_list": [ 00:16:52.907 { 00:16:52.907 "name": "spare", 00:16:52.907 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:52.907 "is_configured": true, 00:16:52.907 "data_offset": 0, 00:16:52.907 "data_size": 65536 00:16:52.907 }, 00:16:52.907 { 00:16:52.907 "name": "BaseBdev2", 00:16:52.907 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:52.907 "is_configured": true, 00:16:52.907 "data_offset": 0, 00:16:52.907 "data_size": 65536 00:16:52.907 }, 00:16:52.907 { 00:16:52.907 "name": "BaseBdev3", 00:16:52.907 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:52.907 "is_configured": true, 00:16:52.907 "data_offset": 0, 00:16:52.907 "data_size": 65536 00:16:52.907 }, 00:16:52.907 { 00:16:52.907 "name": "BaseBdev4", 00:16:52.907 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:52.907 "is_configured": true, 00:16:52.907 "data_offset": 0, 00:16:52.907 "data_size": 65536 00:16:52.907 } 00:16:52.907 ] 00:16:52.907 }' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.907 07:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.284 07:48:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.284 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.284 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.284 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.284 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.285 "name": "raid_bdev1", 00:16:54.285 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:54.285 "strip_size_kb": 64, 00:16:54.285 "state": "online", 00:16:54.285 "raid_level": "raid5f", 00:16:54.285 "superblock": false, 00:16:54.285 "num_base_bdevs": 4, 00:16:54.285 "num_base_bdevs_discovered": 4, 00:16:54.285 "num_base_bdevs_operational": 4, 00:16:54.285 "process": { 00:16:54.285 "type": "rebuild", 00:16:54.285 "target": "spare", 00:16:54.285 "progress": { 00:16:54.285 "blocks": 42240, 00:16:54.285 "percent": 21 00:16:54.285 } 00:16:54.285 }, 00:16:54.285 "base_bdevs_list": [ 00:16:54.285 { 00:16:54.285 "name": "spare", 00:16:54.285 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:54.285 "is_configured": true, 
00:16:54.285 "data_offset": 0, 00:16:54.285 "data_size": 65536 00:16:54.285 }, 00:16:54.285 { 00:16:54.285 "name": "BaseBdev2", 00:16:54.285 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:54.285 "is_configured": true, 00:16:54.285 "data_offset": 0, 00:16:54.285 "data_size": 65536 00:16:54.285 }, 00:16:54.285 { 00:16:54.285 "name": "BaseBdev3", 00:16:54.285 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:54.285 "is_configured": true, 00:16:54.285 "data_offset": 0, 00:16:54.285 "data_size": 65536 00:16:54.285 }, 00:16:54.285 { 00:16:54.285 "name": "BaseBdev4", 00:16:54.285 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:54.285 "is_configured": true, 00:16:54.285 "data_offset": 0, 00:16:54.285 "data_size": 65536 00:16:54.285 } 00:16:54.285 ] 00:16:54.285 }' 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.285 07:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.221 07:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.221 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.221 "name": "raid_bdev1", 00:16:55.221 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:55.221 "strip_size_kb": 64, 00:16:55.221 "state": "online", 00:16:55.221 "raid_level": "raid5f", 00:16:55.221 "superblock": false, 00:16:55.221 "num_base_bdevs": 4, 00:16:55.221 "num_base_bdevs_discovered": 4, 00:16:55.221 "num_base_bdevs_operational": 4, 00:16:55.221 "process": { 00:16:55.221 "type": "rebuild", 00:16:55.221 "target": "spare", 00:16:55.221 "progress": { 00:16:55.221 "blocks": 63360, 00:16:55.221 "percent": 32 00:16:55.221 } 00:16:55.221 }, 00:16:55.221 "base_bdevs_list": [ 00:16:55.221 { 00:16:55.221 "name": "spare", 00:16:55.221 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:55.221 "is_configured": true, 00:16:55.221 "data_offset": 0, 00:16:55.221 "data_size": 65536 00:16:55.221 }, 00:16:55.221 { 00:16:55.221 "name": "BaseBdev2", 00:16:55.221 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:55.221 "is_configured": true, 00:16:55.221 "data_offset": 0, 00:16:55.221 "data_size": 65536 00:16:55.221 }, 00:16:55.221 { 00:16:55.221 "name": "BaseBdev3", 00:16:55.221 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:55.221 "is_configured": true, 00:16:55.221 "data_offset": 0, 00:16:55.221 "data_size": 65536 00:16:55.221 }, 00:16:55.221 { 00:16:55.221 "name": "BaseBdev4", 00:16:55.221 "uuid": 
"19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:55.221 "is_configured": true, 00:16:55.221 "data_offset": 0, 00:16:55.221 "data_size": 65536 00:16:55.221 } 00:16:55.221 ] 00:16:55.221 }' 00:16:55.221 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.222 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.222 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.222 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.222 07:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.600 07:48:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.600 "name": "raid_bdev1", 00:16:56.600 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:56.600 "strip_size_kb": 64, 00:16:56.600 "state": "online", 00:16:56.600 "raid_level": "raid5f", 00:16:56.600 "superblock": false, 00:16:56.600 "num_base_bdevs": 4, 00:16:56.600 "num_base_bdevs_discovered": 4, 00:16:56.600 "num_base_bdevs_operational": 4, 00:16:56.601 "process": { 00:16:56.601 "type": "rebuild", 00:16:56.601 "target": "spare", 00:16:56.601 "progress": { 00:16:56.601 "blocks": 86400, 00:16:56.601 "percent": 43 00:16:56.601 } 00:16:56.601 }, 00:16:56.601 "base_bdevs_list": [ 00:16:56.601 { 00:16:56.601 "name": "spare", 00:16:56.601 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 0, 00:16:56.601 "data_size": 65536 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev2", 00:16:56.601 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 0, 00:16:56.601 "data_size": 65536 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev3", 00:16:56.601 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 0, 00:16:56.601 "data_size": 65536 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev4", 00:16:56.601 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 0, 00:16:56.601 "data_size": 65536 00:16:56.601 } 00:16:56.601 ] 00:16:56.601 }' 00:16:56.601 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.601 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.601 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.601 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:56.601 07:48:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.538 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.539 "name": "raid_bdev1", 00:16:57.539 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:57.539 "strip_size_kb": 64, 00:16:57.539 "state": "online", 00:16:57.539 "raid_level": "raid5f", 00:16:57.539 "superblock": false, 00:16:57.539 "num_base_bdevs": 4, 00:16:57.539 "num_base_bdevs_discovered": 4, 00:16:57.539 "num_base_bdevs_operational": 4, 00:16:57.539 "process": { 00:16:57.539 "type": "rebuild", 00:16:57.539 "target": "spare", 00:16:57.539 "progress": { 00:16:57.539 "blocks": 107520, 00:16:57.539 "percent": 54 00:16:57.539 } 00:16:57.539 }, 00:16:57.539 
"base_bdevs_list": [ 00:16:57.539 { 00:16:57.539 "name": "spare", 00:16:57.539 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:57.539 "is_configured": true, 00:16:57.539 "data_offset": 0, 00:16:57.539 "data_size": 65536 00:16:57.539 }, 00:16:57.539 { 00:16:57.539 "name": "BaseBdev2", 00:16:57.539 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:57.539 "is_configured": true, 00:16:57.539 "data_offset": 0, 00:16:57.539 "data_size": 65536 00:16:57.539 }, 00:16:57.539 { 00:16:57.539 "name": "BaseBdev3", 00:16:57.539 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:57.539 "is_configured": true, 00:16:57.539 "data_offset": 0, 00:16:57.539 "data_size": 65536 00:16:57.539 }, 00:16:57.539 { 00:16:57.539 "name": "BaseBdev4", 00:16:57.539 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:57.539 "is_configured": true, 00:16:57.539 "data_offset": 0, 00:16:57.539 "data_size": 65536 00:16:57.539 } 00:16:57.539 ] 00:16:57.539 }' 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.539 07:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.478 07:48:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.478 07:48:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.738 "name": "raid_bdev1", 00:16:58.738 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:58.738 "strip_size_kb": 64, 00:16:58.738 "state": "online", 00:16:58.738 "raid_level": "raid5f", 00:16:58.738 "superblock": false, 00:16:58.738 "num_base_bdevs": 4, 00:16:58.738 "num_base_bdevs_discovered": 4, 00:16:58.738 "num_base_bdevs_operational": 4, 00:16:58.738 "process": { 00:16:58.738 "type": "rebuild", 00:16:58.738 "target": "spare", 00:16:58.738 "progress": { 00:16:58.738 "blocks": 130560, 00:16:58.738 "percent": 66 00:16:58.738 } 00:16:58.738 }, 00:16:58.738 "base_bdevs_list": [ 00:16:58.738 { 00:16:58.738 "name": "spare", 00:16:58.738 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:58.738 "is_configured": true, 00:16:58.738 "data_offset": 0, 00:16:58.738 "data_size": 65536 00:16:58.738 }, 00:16:58.738 { 00:16:58.738 "name": "BaseBdev2", 00:16:58.738 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:58.738 "is_configured": true, 00:16:58.738 "data_offset": 0, 00:16:58.738 "data_size": 65536 00:16:58.738 }, 00:16:58.738 { 00:16:58.738 "name": "BaseBdev3", 00:16:58.738 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:58.738 
"is_configured": true, 00:16:58.738 "data_offset": 0, 00:16:58.738 "data_size": 65536 00:16:58.738 }, 00:16:58.738 { 00:16:58.738 "name": "BaseBdev4", 00:16:58.738 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:58.738 "is_configured": true, 00:16:58.738 "data_offset": 0, 00:16:58.738 "data_size": 65536 00:16:58.738 } 00:16:58.738 ] 00:16:58.738 }' 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.738 07:48:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.675 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.676 "name": "raid_bdev1", 00:16:59.676 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:16:59.676 "strip_size_kb": 64, 00:16:59.676 "state": "online", 00:16:59.676 "raid_level": "raid5f", 00:16:59.676 "superblock": false, 00:16:59.676 "num_base_bdevs": 4, 00:16:59.676 "num_base_bdevs_discovered": 4, 00:16:59.676 "num_base_bdevs_operational": 4, 00:16:59.676 "process": { 00:16:59.676 "type": "rebuild", 00:16:59.676 "target": "spare", 00:16:59.676 "progress": { 00:16:59.676 "blocks": 151680, 00:16:59.676 "percent": 77 00:16:59.676 } 00:16:59.676 }, 00:16:59.676 "base_bdevs_list": [ 00:16:59.676 { 00:16:59.676 "name": "spare", 00:16:59.676 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:16:59.676 "is_configured": true, 00:16:59.676 "data_offset": 0, 00:16:59.676 "data_size": 65536 00:16:59.676 }, 00:16:59.676 { 00:16:59.676 "name": "BaseBdev2", 00:16:59.676 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:16:59.676 "is_configured": true, 00:16:59.676 "data_offset": 0, 00:16:59.676 "data_size": 65536 00:16:59.676 }, 00:16:59.676 { 00:16:59.676 "name": "BaseBdev3", 00:16:59.676 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:16:59.676 "is_configured": true, 00:16:59.676 "data_offset": 0, 00:16:59.676 "data_size": 65536 00:16:59.676 }, 00:16:59.676 { 00:16:59.676 "name": "BaseBdev4", 00:16:59.676 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:16:59.676 "is_configured": true, 00:16:59.676 "data_offset": 0, 00:16:59.676 "data_size": 65536 00:16:59.676 } 00:16:59.676 ] 00:16:59.676 }' 00:16:59.676 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.965 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.965 07:48:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.965 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.965 07:48:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.903 07:48:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.904 "name": "raid_bdev1", 00:17:00.904 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:17:00.904 "strip_size_kb": 64, 00:17:00.904 "state": "online", 00:17:00.904 "raid_level": "raid5f", 00:17:00.904 "superblock": false, 00:17:00.904 "num_base_bdevs": 4, 00:17:00.904 "num_base_bdevs_discovered": 4, 00:17:00.904 "num_base_bdevs_operational": 4, 00:17:00.904 "process": { 00:17:00.904 
"type": "rebuild", 00:17:00.904 "target": "spare", 00:17:00.904 "progress": { 00:17:00.904 "blocks": 174720, 00:17:00.904 "percent": 88 00:17:00.904 } 00:17:00.904 }, 00:17:00.904 "base_bdevs_list": [ 00:17:00.904 { 00:17:00.904 "name": "spare", 00:17:00.904 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:17:00.904 "is_configured": true, 00:17:00.904 "data_offset": 0, 00:17:00.904 "data_size": 65536 00:17:00.904 }, 00:17:00.904 { 00:17:00.904 "name": "BaseBdev2", 00:17:00.904 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:17:00.904 "is_configured": true, 00:17:00.904 "data_offset": 0, 00:17:00.904 "data_size": 65536 00:17:00.904 }, 00:17:00.904 { 00:17:00.904 "name": "BaseBdev3", 00:17:00.904 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:17:00.904 "is_configured": true, 00:17:00.904 "data_offset": 0, 00:17:00.904 "data_size": 65536 00:17:00.904 }, 00:17:00.904 { 00:17:00.904 "name": "BaseBdev4", 00:17:00.904 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:17:00.904 "is_configured": true, 00:17:00.904 "data_offset": 0, 00:17:00.904 "data_size": 65536 00:17:00.904 } 00:17:00.904 ] 00:17:00.904 }' 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.904 07:48:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.285 "name": "raid_bdev1", 00:17:02.285 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:17:02.285 "strip_size_kb": 64, 00:17:02.285 "state": "online", 00:17:02.285 "raid_level": "raid5f", 00:17:02.285 "superblock": false, 00:17:02.285 "num_base_bdevs": 4, 00:17:02.285 "num_base_bdevs_discovered": 4, 00:17:02.285 "num_base_bdevs_operational": 4, 00:17:02.285 "process": { 00:17:02.285 "type": "rebuild", 00:17:02.285 "target": "spare", 00:17:02.285 "progress": { 00:17:02.285 "blocks": 195840, 00:17:02.285 "percent": 99 00:17:02.285 } 00:17:02.285 }, 00:17:02.285 "base_bdevs_list": [ 00:17:02.285 { 00:17:02.285 "name": "spare", 00:17:02.285 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:17:02.285 "is_configured": true, 00:17:02.285 "data_offset": 0, 00:17:02.285 "data_size": 65536 00:17:02.285 }, 00:17:02.285 { 00:17:02.285 "name": "BaseBdev2", 00:17:02.285 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:17:02.285 "is_configured": true, 00:17:02.285 "data_offset": 0, 00:17:02.285 
"data_size": 65536 00:17:02.285 }, 00:17:02.285 { 00:17:02.285 "name": "BaseBdev3", 00:17:02.285 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:17:02.285 "is_configured": true, 00:17:02.285 "data_offset": 0, 00:17:02.285 "data_size": 65536 00:17:02.285 }, 00:17:02.285 { 00:17:02.285 "name": "BaseBdev4", 00:17:02.285 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:17:02.285 "is_configured": true, 00:17:02.285 "data_offset": 0, 00:17:02.285 "data_size": 65536 00:17:02.285 } 00:17:02.285 ] 00:17:02.285 }' 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.285 [2024-11-29 07:48:51.907053] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:02.285 [2024-11-29 07:48:51.907133] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:02.285 [2024-11-29 07:48:51.907191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.285 07:48:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.225 07:48:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.226 07:48:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.226 "name": "raid_bdev1", 00:17:03.226 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:17:03.226 "strip_size_kb": 64, 00:17:03.226 "state": "online", 00:17:03.226 "raid_level": "raid5f", 00:17:03.226 "superblock": false, 00:17:03.226 "num_base_bdevs": 4, 00:17:03.226 "num_base_bdevs_discovered": 4, 00:17:03.226 "num_base_bdevs_operational": 4, 00:17:03.226 "base_bdevs_list": [ 00:17:03.226 { 00:17:03.226 "name": "spare", 00:17:03.226 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev2", 00:17:03.226 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev3", 00:17:03.226 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev4", 00:17:03.226 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 
00:17:03.226 "data_size": 65536 00:17:03.226 } 00:17:03.226 ] 00:17:03.226 }' 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.226 "name": "raid_bdev1", 00:17:03.226 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:17:03.226 "strip_size_kb": 64, 00:17:03.226 "state": "online", 00:17:03.226 "raid_level": 
"raid5f", 00:17:03.226 "superblock": false, 00:17:03.226 "num_base_bdevs": 4, 00:17:03.226 "num_base_bdevs_discovered": 4, 00:17:03.226 "num_base_bdevs_operational": 4, 00:17:03.226 "base_bdevs_list": [ 00:17:03.226 { 00:17:03.226 "name": "spare", 00:17:03.226 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev2", 00:17:03.226 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev3", 00:17:03.226 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 }, 00:17:03.226 { 00:17:03.226 "name": "BaseBdev4", 00:17:03.226 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:17:03.226 "is_configured": true, 00:17:03.226 "data_offset": 0, 00:17:03.226 "data_size": 65536 00:17:03.226 } 00:17:03.226 ] 00:17:03.226 }' 00:17:03.226 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.486 "name": "raid_bdev1", 00:17:03.486 "uuid": "0d6549f3-5494-480d-8cea-965737ae8eb8", 00:17:03.486 "strip_size_kb": 64, 00:17:03.486 "state": "online", 00:17:03.486 "raid_level": "raid5f", 00:17:03.486 "superblock": false, 00:17:03.486 "num_base_bdevs": 4, 00:17:03.486 "num_base_bdevs_discovered": 4, 00:17:03.486 "num_base_bdevs_operational": 4, 00:17:03.486 "base_bdevs_list": [ 00:17:03.486 { 00:17:03.486 "name": "spare", 00:17:03.486 "uuid": "e238e0ac-cbb3-5a36-a68b-1bc9c784b23d", 00:17:03.486 "is_configured": true, 00:17:03.486 "data_offset": 0, 00:17:03.486 "data_size": 65536 00:17:03.486 }, 00:17:03.486 { 00:17:03.486 "name": "BaseBdev2", 
00:17:03.486 "uuid": "0c9e241d-0f51-5d72-acbc-6e0cc88d23a2", 00:17:03.486 "is_configured": true, 00:17:03.486 "data_offset": 0, 00:17:03.486 "data_size": 65536 00:17:03.486 }, 00:17:03.486 { 00:17:03.486 "name": "BaseBdev3", 00:17:03.486 "uuid": "6a127bc3-b10c-5309-a36f-0c3770af170b", 00:17:03.486 "is_configured": true, 00:17:03.486 "data_offset": 0, 00:17:03.486 "data_size": 65536 00:17:03.486 }, 00:17:03.486 { 00:17:03.486 "name": "BaseBdev4", 00:17:03.486 "uuid": "19abf78f-8bbb-506f-be0b-caebb4ab7bb0", 00:17:03.486 "is_configured": true, 00:17:03.486 "data_offset": 0, 00:17:03.486 "data_size": 65536 00:17:03.486 } 00:17:03.486 ] 00:17:03.486 }' 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.486 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.056 [2024-11-29 07:48:53.698798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.056 [2024-11-29 07:48:53.698834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.056 [2024-11-29 07:48:53.698928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.056 [2024-11-29 07:48:53.699022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.056 [2024-11-29 07:48:53.699039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:04.056 /dev/nbd0 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.056 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.057 1+0 records in 00:17:04.057 1+0 records out 00:17:04.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248654 s, 16.5 MB/s 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.057 07:48:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:04.316 /dev/nbd1 00:17:04.316 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:04.316 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:04.316 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.317 1+0 records in 00:17:04.317 1+0 records out 00:17:04.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422881 s, 9.7 MB/s 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.317 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.576 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.836 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84266 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84266 ']' 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84266 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84266 00:17:05.096 killing process with pid 84266 00:17:05.096 Received shutdown signal, test time was about 60.000000 seconds 00:17:05.096 00:17:05.096 Latency(us) 00:17:05.096 [2024-11-29T07:48:55.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.096 [2024-11-29T07:48:55.041Z] =================================================================================================================== 00:17:05.096 [2024-11-29T07:48:55.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84266' 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84266 00:17:05.096 [2024-11-29 07:48:54.828446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.096 07:48:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84266 00:17:05.355 [2024-11-29 07:48:55.283599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:06.735 00:17:06.735 real 0m19.798s 00:17:06.735 user 0m23.662s 00:17:06.735 sys 0m2.142s 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.735 ************************************ 00:17:06.735 END TEST raid5f_rebuild_test 00:17:06.735 ************************************ 00:17:06.735 07:48:56 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:06.735 07:48:56 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.735 07:48:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.735 07:48:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.735 ************************************ 00:17:06.735 START TEST raid5f_rebuild_test_sb 00:17:06.735 ************************************ 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:06.735 07:48:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84789 
00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84789 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84789 ']' 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.735 07:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.735 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.735 Zero copy mechanism will not be used. 00:17:06.735 [2024-11-29 07:48:56.500409] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:06.735 [2024-11-29 07:48:56.500538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84789 ] 00:17:06.735 [2024-11-29 07:48:56.670905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.995 [2024-11-29 07:48:56.776729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.255 [2024-11-29 07:48:56.974360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.255 [2024-11-29 07:48:56.974464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.515 BaseBdev1_malloc 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.515 [2024-11-29 07:48:57.354318] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.515 [2024-11-29 07:48:57.354379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.515 [2024-11-29 07:48:57.354418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.515 [2024-11-29 07:48:57.354430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.515 [2024-11-29 07:48:57.356411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.515 [2024-11-29 07:48:57.356452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.515 BaseBdev1 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.515 BaseBdev2_malloc 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.515 [2024-11-29 07:48:57.406791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.515 [2024-11-29 07:48:57.406847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:07.515 [2024-11-29 07:48:57.406885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.515 [2024-11-29 07:48:57.406895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.515 [2024-11-29 07:48:57.408933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.515 [2024-11-29 07:48:57.408973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.515 BaseBdev2 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.515 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 BaseBdev3_malloc 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-11-29 07:48:57.489554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:07.776 [2024-11-29 07:48:57.489604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.776 [2024-11-29 07:48:57.489643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:07.776 [2024-11-29 
07:48:57.489654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.776 [2024-11-29 07:48:57.491652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.776 [2024-11-29 07:48:57.491693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:07.776 BaseBdev3 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 BaseBdev4_malloc 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-11-29 07:48:57.540705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:07.776 [2024-11-29 07:48:57.540759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.776 [2024-11-29 07:48:57.540780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:07.776 [2024-11-29 07:48:57.540790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.776 [2024-11-29 07:48:57.542815] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:07.776 [2024-11-29 07:48:57.542890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:07.776 BaseBdev4 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 spare_malloc 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 spare_delay 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-11-29 07:48:57.607810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.776 [2024-11-29 07:48:57.607857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.776 [2024-11-29 07:48:57.607897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:07.776 [2024-11-29 07:48:57.607907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.776 [2024-11-29 07:48:57.609894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.776 [2024-11-29 07:48:57.609932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.776 spare 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-11-29 07:48:57.619852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.776 [2024-11-29 07:48:57.621627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.776 [2024-11-29 07:48:57.621685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.776 [2024-11-29 07:48:57.621731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:07.776 [2024-11-29 07:48:57.621906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.776 [2024-11-29 07:48:57.621919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.776 [2024-11-29 07:48:57.622172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.776 [2024-11-29 07:48:57.629053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.776 [2024-11-29 07:48:57.629075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.776 [2024-11-29 07:48:57.629310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 07:48:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.776 "name": "raid_bdev1", 00:17:07.776 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:07.776 "strip_size_kb": 64, 00:17:07.776 "state": "online", 00:17:07.776 "raid_level": "raid5f", 00:17:07.777 "superblock": true, 00:17:07.777 "num_base_bdevs": 4, 00:17:07.777 "num_base_bdevs_discovered": 4, 00:17:07.777 "num_base_bdevs_operational": 4, 00:17:07.777 "base_bdevs_list": [ 00:17:07.777 { 00:17:07.777 "name": "BaseBdev1", 00:17:07.777 "uuid": "ba8935f7-bc56-5d63-ba1d-fe7ecd584947", 00:17:07.777 "is_configured": true, 00:17:07.777 "data_offset": 2048, 00:17:07.777 "data_size": 63488 00:17:07.777 }, 00:17:07.777 { 00:17:07.777 "name": "BaseBdev2", 00:17:07.777 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:07.777 "is_configured": true, 00:17:07.777 "data_offset": 2048, 00:17:07.777 "data_size": 63488 00:17:07.777 }, 00:17:07.777 { 00:17:07.777 "name": "BaseBdev3", 00:17:07.777 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:07.777 "is_configured": true, 00:17:07.777 "data_offset": 2048, 00:17:07.777 "data_size": 63488 00:17:07.777 }, 00:17:07.777 { 00:17:07.777 "name": "BaseBdev4", 00:17:07.777 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:07.777 "is_configured": true, 00:17:07.777 "data_offset": 2048, 00:17:07.777 "data_size": 63488 00:17:07.777 } 00:17:07.777 ] 00:17:07.777 }' 00:17:07.777 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.777 07:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:08.346 [2024-11-29 07:48:58.105002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:08.346 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.347 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:08.606 [2024-11-29 07:48:58.368393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:08.607 /dev/nbd0 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:08.607 1+0 records in 00:17:08.607 1+0 records out 00:17:08.607 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000423598 s, 9.7 MB/s 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:08.607 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:09.177 496+0 records in 00:17:09.177 496+0 records out 00:17:09.177 97517568 bytes (98 MB, 93 MiB) copied, 0.452432 s, 216 MB/s 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.177 07:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:09.177 [2024-11-29 07:48:59.104179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.177 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.438 [2024-11-29 07:48:59.125922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.438 "name": "raid_bdev1", 00:17:09.438 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:09.438 "strip_size_kb": 64, 00:17:09.438 "state": "online", 00:17:09.438 "raid_level": "raid5f", 00:17:09.438 "superblock": true, 00:17:09.438 "num_base_bdevs": 4, 00:17:09.438 "num_base_bdevs_discovered": 3, 00:17:09.438 "num_base_bdevs_operational": 3, 00:17:09.438 "base_bdevs_list": [ 00:17:09.438 { 00:17:09.438 "name": null, 
00:17:09.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.438 "is_configured": false, 00:17:09.438 "data_offset": 0, 00:17:09.438 "data_size": 63488 00:17:09.438 }, 00:17:09.438 { 00:17:09.438 "name": "BaseBdev2", 00:17:09.438 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:09.438 "is_configured": true, 00:17:09.438 "data_offset": 2048, 00:17:09.438 "data_size": 63488 00:17:09.438 }, 00:17:09.438 { 00:17:09.438 "name": "BaseBdev3", 00:17:09.438 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:09.438 "is_configured": true, 00:17:09.438 "data_offset": 2048, 00:17:09.438 "data_size": 63488 00:17:09.438 }, 00:17:09.438 { 00:17:09.438 "name": "BaseBdev4", 00:17:09.438 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:09.438 "is_configured": true, 00:17:09.438 "data_offset": 2048, 00:17:09.438 "data_size": 63488 00:17:09.438 } 00:17:09.438 ] 00:17:09.438 }' 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.438 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.698 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.698 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.698 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.698 [2024-11-29 07:48:59.549217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.698 [2024-11-29 07:48:59.565181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:09.698 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.698 07:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.698 [2024-11-29 07:48:59.574505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.639 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.899 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.899 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.899 "name": "raid_bdev1", 00:17:10.899 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:10.899 "strip_size_kb": 64, 00:17:10.899 "state": "online", 00:17:10.899 "raid_level": "raid5f", 00:17:10.899 "superblock": true, 00:17:10.899 "num_base_bdevs": 4, 00:17:10.899 "num_base_bdevs_discovered": 4, 00:17:10.899 "num_base_bdevs_operational": 4, 00:17:10.899 "process": { 00:17:10.899 "type": "rebuild", 00:17:10.899 "target": "spare", 00:17:10.899 "progress": { 00:17:10.899 "blocks": 19200, 00:17:10.899 "percent": 10 00:17:10.899 } 00:17:10.899 }, 00:17:10.899 "base_bdevs_list": [ 00:17:10.899 { 00:17:10.899 "name": "spare", 00:17:10.899 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:10.900 "is_configured": true, 
00:17:10.900 "data_offset": 2048, 00:17:10.900 "data_size": 63488 00:17:10.900 }, 00:17:10.900 { 00:17:10.900 "name": "BaseBdev2", 00:17:10.900 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:10.900 "is_configured": true, 00:17:10.900 "data_offset": 2048, 00:17:10.900 "data_size": 63488 00:17:10.900 }, 00:17:10.900 { 00:17:10.900 "name": "BaseBdev3", 00:17:10.900 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:10.900 "is_configured": true, 00:17:10.900 "data_offset": 2048, 00:17:10.900 "data_size": 63488 00:17:10.900 }, 00:17:10.900 { 00:17:10.900 "name": "BaseBdev4", 00:17:10.900 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:10.900 "is_configured": true, 00:17:10.900 "data_offset": 2048, 00:17:10.900 "data_size": 63488 00:17:10.900 } 00:17:10.900 ] 00:17:10.900 }' 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.900 [2024-11-29 07:49:00.721285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.900 [2024-11-29 07:49:00.780523] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.900 [2024-11-29 07:49:00.780586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.900 [2024-11-29 
07:49:00.780604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.900 [2024-11-29 07:49:00.780613] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.900 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.159 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.159 "name": "raid_bdev1", 00:17:11.159 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:11.159 "strip_size_kb": 64, 00:17:11.159 "state": "online", 00:17:11.159 "raid_level": "raid5f", 00:17:11.159 "superblock": true, 00:17:11.159 "num_base_bdevs": 4, 00:17:11.159 "num_base_bdevs_discovered": 3, 00:17:11.159 "num_base_bdevs_operational": 3, 00:17:11.159 "base_bdevs_list": [ 00:17:11.159 { 00:17:11.159 "name": null, 00:17:11.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.159 "is_configured": false, 00:17:11.159 "data_offset": 0, 00:17:11.159 "data_size": 63488 00:17:11.159 }, 00:17:11.159 { 00:17:11.159 "name": "BaseBdev2", 00:17:11.159 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:11.159 "is_configured": true, 00:17:11.159 "data_offset": 2048, 00:17:11.159 "data_size": 63488 00:17:11.159 }, 00:17:11.159 { 00:17:11.159 "name": "BaseBdev3", 00:17:11.159 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:11.159 "is_configured": true, 00:17:11.160 "data_offset": 2048, 00:17:11.160 "data_size": 63488 00:17:11.160 }, 00:17:11.160 { 00:17:11.160 "name": "BaseBdev4", 00:17:11.160 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:11.160 "is_configured": true, 00:17:11.160 "data_offset": 2048, 00:17:11.160 "data_size": 63488 00:17:11.160 } 00:17:11.160 ] 00:17:11.160 }' 00:17:11.160 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.160 07:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.420 "name": "raid_bdev1", 00:17:11.420 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:11.420 "strip_size_kb": 64, 00:17:11.420 "state": "online", 00:17:11.420 "raid_level": "raid5f", 00:17:11.420 "superblock": true, 00:17:11.420 "num_base_bdevs": 4, 00:17:11.420 "num_base_bdevs_discovered": 3, 00:17:11.420 "num_base_bdevs_operational": 3, 00:17:11.420 "base_bdevs_list": [ 00:17:11.420 { 00:17:11.420 "name": null, 00:17:11.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.420 "is_configured": false, 00:17:11.420 "data_offset": 0, 00:17:11.420 "data_size": 63488 00:17:11.420 }, 00:17:11.420 { 00:17:11.420 "name": "BaseBdev2", 00:17:11.420 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:11.420 "is_configured": true, 00:17:11.420 "data_offset": 2048, 00:17:11.420 "data_size": 63488 00:17:11.420 }, 00:17:11.420 { 00:17:11.420 "name": "BaseBdev3", 00:17:11.420 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:11.420 "is_configured": true, 00:17:11.420 "data_offset": 2048, 00:17:11.420 "data_size": 63488 00:17:11.420 }, 
00:17:11.420 { 00:17:11.420 "name": "BaseBdev4", 00:17:11.420 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:11.420 "is_configured": true, 00:17:11.420 "data_offset": 2048, 00:17:11.420 "data_size": 63488 00:17:11.420 } 00:17:11.420 ] 00:17:11.420 }' 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.420 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.680 [2024-11-29 07:49:01.376053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.680 [2024-11-29 07:49:01.391393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.680 07:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.680 [2024-11-29 07:49:01.400799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.619 "name": "raid_bdev1", 00:17:12.619 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:12.619 "strip_size_kb": 64, 00:17:12.619 "state": "online", 00:17:12.619 "raid_level": "raid5f", 00:17:12.619 "superblock": true, 00:17:12.619 "num_base_bdevs": 4, 00:17:12.619 "num_base_bdevs_discovered": 4, 00:17:12.619 "num_base_bdevs_operational": 4, 00:17:12.619 "process": { 00:17:12.619 "type": "rebuild", 00:17:12.619 "target": "spare", 00:17:12.619 "progress": { 00:17:12.619 "blocks": 19200, 00:17:12.619 "percent": 10 00:17:12.619 } 00:17:12.619 }, 00:17:12.619 "base_bdevs_list": [ 00:17:12.619 { 00:17:12.619 "name": "spare", 00:17:12.619 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:12.619 "is_configured": true, 00:17:12.619 "data_offset": 2048, 00:17:12.619 "data_size": 63488 00:17:12.619 }, 00:17:12.619 { 00:17:12.619 "name": "BaseBdev2", 00:17:12.619 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:12.619 "is_configured": true, 00:17:12.619 "data_offset": 2048, 00:17:12.619 "data_size": 63488 00:17:12.619 }, 00:17:12.619 { 00:17:12.619 "name": "BaseBdev3", 00:17:12.619 "uuid": 
"843254ec-98a8-5951-951d-f9cf790754ab", 00:17:12.619 "is_configured": true, 00:17:12.619 "data_offset": 2048, 00:17:12.619 "data_size": 63488 00:17:12.619 }, 00:17:12.619 { 00:17:12.619 "name": "BaseBdev4", 00:17:12.619 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:12.619 "is_configured": true, 00:17:12.619 "data_offset": 2048, 00:17:12.619 "data_size": 63488 00:17:12.619 } 00:17:12.619 ] 00:17:12.619 }' 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.619 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:12.620 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.620 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.882 "name": "raid_bdev1", 00:17:12.882 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:12.882 "strip_size_kb": 64, 00:17:12.882 "state": "online", 00:17:12.882 "raid_level": "raid5f", 00:17:12.882 "superblock": true, 00:17:12.882 "num_base_bdevs": 4, 00:17:12.882 "num_base_bdevs_discovered": 4, 00:17:12.882 "num_base_bdevs_operational": 4, 00:17:12.882 "process": { 00:17:12.882 "type": "rebuild", 00:17:12.882 "target": "spare", 00:17:12.882 "progress": { 00:17:12.882 "blocks": 21120, 00:17:12.882 "percent": 11 00:17:12.882 } 00:17:12.882 }, 00:17:12.882 "base_bdevs_list": [ 00:17:12.882 { 00:17:12.882 "name": "spare", 00:17:12.882 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:12.882 "is_configured": true, 00:17:12.882 "data_offset": 2048, 00:17:12.882 "data_size": 63488 00:17:12.882 }, 00:17:12.882 { 00:17:12.882 "name": "BaseBdev2", 00:17:12.882 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:12.882 "is_configured": true, 00:17:12.882 "data_offset": 2048, 00:17:12.882 "data_size": 63488 00:17:12.882 }, 00:17:12.882 { 00:17:12.882 "name": "BaseBdev3", 00:17:12.882 "uuid": 
"843254ec-98a8-5951-951d-f9cf790754ab", 00:17:12.882 "is_configured": true, 00:17:12.882 "data_offset": 2048, 00:17:12.882 "data_size": 63488 00:17:12.882 }, 00:17:12.882 { 00:17:12.882 "name": "BaseBdev4", 00:17:12.882 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:12.882 "is_configured": true, 00:17:12.882 "data_offset": 2048, 00:17:12.882 "data_size": 63488 00:17:12.882 } 00:17:12.882 ] 00:17:12.882 }' 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.882 07:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.846 "name": "raid_bdev1", 00:17:13.846 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:13.846 "strip_size_kb": 64, 00:17:13.846 "state": "online", 00:17:13.846 "raid_level": "raid5f", 00:17:13.846 "superblock": true, 00:17:13.846 "num_base_bdevs": 4, 00:17:13.846 "num_base_bdevs_discovered": 4, 00:17:13.846 "num_base_bdevs_operational": 4, 00:17:13.846 "process": { 00:17:13.846 "type": "rebuild", 00:17:13.846 "target": "spare", 00:17:13.846 "progress": { 00:17:13.846 "blocks": 44160, 00:17:13.846 "percent": 23 00:17:13.846 } 00:17:13.846 }, 00:17:13.846 "base_bdevs_list": [ 00:17:13.846 { 00:17:13.846 "name": "spare", 00:17:13.846 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:13.846 "is_configured": true, 00:17:13.846 "data_offset": 2048, 00:17:13.846 "data_size": 63488 00:17:13.846 }, 00:17:13.846 { 00:17:13.846 "name": "BaseBdev2", 00:17:13.846 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:13.846 "is_configured": true, 00:17:13.846 "data_offset": 2048, 00:17:13.846 "data_size": 63488 00:17:13.846 }, 00:17:13.846 { 00:17:13.846 "name": "BaseBdev3", 00:17:13.846 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:13.846 "is_configured": true, 00:17:13.846 "data_offset": 2048, 00:17:13.846 "data_size": 63488 00:17:13.846 }, 00:17:13.846 { 00:17:13.846 "name": "BaseBdev4", 00:17:13.846 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:13.846 "is_configured": true, 00:17:13.846 "data_offset": 2048, 00:17:13.846 "data_size": 63488 00:17:13.846 } 00:17:13.846 ] 00:17:13.846 }' 00:17:13.846 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.106 07:49:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.106 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.106 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.106 07:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.046 "name": "raid_bdev1", 00:17:15.046 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:15.046 "strip_size_kb": 64, 00:17:15.046 "state": "online", 00:17:15.046 "raid_level": "raid5f", 00:17:15.046 "superblock": true, 
00:17:15.046 "num_base_bdevs": 4, 00:17:15.046 "num_base_bdevs_discovered": 4, 00:17:15.046 "num_base_bdevs_operational": 4, 00:17:15.046 "process": { 00:17:15.046 "type": "rebuild", 00:17:15.046 "target": "spare", 00:17:15.046 "progress": { 00:17:15.046 "blocks": 65280, 00:17:15.046 "percent": 34 00:17:15.046 } 00:17:15.046 }, 00:17:15.046 "base_bdevs_list": [ 00:17:15.046 { 00:17:15.046 "name": "spare", 00:17:15.046 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:15.046 "is_configured": true, 00:17:15.046 "data_offset": 2048, 00:17:15.046 "data_size": 63488 00:17:15.046 }, 00:17:15.046 { 00:17:15.046 "name": "BaseBdev2", 00:17:15.046 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:15.046 "is_configured": true, 00:17:15.046 "data_offset": 2048, 00:17:15.046 "data_size": 63488 00:17:15.046 }, 00:17:15.046 { 00:17:15.046 "name": "BaseBdev3", 00:17:15.046 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:15.046 "is_configured": true, 00:17:15.046 "data_offset": 2048, 00:17:15.046 "data_size": 63488 00:17:15.046 }, 00:17:15.046 { 00:17:15.046 "name": "BaseBdev4", 00:17:15.046 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:15.046 "is_configured": true, 00:17:15.046 "data_offset": 2048, 00:17:15.046 "data_size": 63488 00:17:15.046 } 00:17:15.046 ] 00:17:15.046 }' 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.046 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.304 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.304 07:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.243 07:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.243 07:49:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.243 "name": "raid_bdev1", 00:17:16.243 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:16.243 "strip_size_kb": 64, 00:17:16.243 "state": "online", 00:17:16.243 "raid_level": "raid5f", 00:17:16.243 "superblock": true, 00:17:16.243 "num_base_bdevs": 4, 00:17:16.243 "num_base_bdevs_discovered": 4, 00:17:16.243 "num_base_bdevs_operational": 4, 00:17:16.243 "process": { 00:17:16.243 "type": "rebuild", 00:17:16.243 "target": "spare", 00:17:16.243 "progress": { 00:17:16.243 "blocks": 86400, 00:17:16.243 "percent": 45 00:17:16.243 } 00:17:16.243 }, 00:17:16.243 "base_bdevs_list": [ 00:17:16.243 { 00:17:16.243 "name": "spare", 00:17:16.243 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:16.243 "is_configured": true, 00:17:16.243 "data_offset": 2048, 00:17:16.243 
"data_size": 63488 00:17:16.243 }, 00:17:16.243 { 00:17:16.243 "name": "BaseBdev2", 00:17:16.243 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:16.243 "is_configured": true, 00:17:16.243 "data_offset": 2048, 00:17:16.243 "data_size": 63488 00:17:16.243 }, 00:17:16.243 { 00:17:16.243 "name": "BaseBdev3", 00:17:16.243 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:16.243 "is_configured": true, 00:17:16.243 "data_offset": 2048, 00:17:16.243 "data_size": 63488 00:17:16.243 }, 00:17:16.243 { 00:17:16.243 "name": "BaseBdev4", 00:17:16.243 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:16.243 "is_configured": true, 00:17:16.243 "data_offset": 2048, 00:17:16.243 "data_size": 63488 00:17:16.243 } 00:17:16.243 ] 00:17:16.243 }' 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.243 07:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.628 "name": "raid_bdev1", 00:17:17.628 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:17.628 "strip_size_kb": 64, 00:17:17.628 "state": "online", 00:17:17.628 "raid_level": "raid5f", 00:17:17.628 "superblock": true, 00:17:17.628 "num_base_bdevs": 4, 00:17:17.628 "num_base_bdevs_discovered": 4, 00:17:17.628 "num_base_bdevs_operational": 4, 00:17:17.628 "process": { 00:17:17.628 "type": "rebuild", 00:17:17.628 "target": "spare", 00:17:17.628 "progress": { 00:17:17.628 "blocks": 109440, 00:17:17.628 "percent": 57 00:17:17.628 } 00:17:17.628 }, 00:17:17.628 "base_bdevs_list": [ 00:17:17.628 { 00:17:17.628 "name": "spare", 00:17:17.628 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:17.628 "is_configured": true, 00:17:17.628 "data_offset": 2048, 00:17:17.628 "data_size": 63488 00:17:17.628 }, 00:17:17.628 { 00:17:17.628 "name": "BaseBdev2", 00:17:17.628 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:17.628 "is_configured": true, 00:17:17.628 "data_offset": 2048, 00:17:17.628 "data_size": 63488 00:17:17.628 }, 00:17:17.628 { 00:17:17.628 "name": "BaseBdev3", 00:17:17.628 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:17.628 "is_configured": true, 00:17:17.628 "data_offset": 2048, 00:17:17.628 "data_size": 63488 00:17:17.628 }, 00:17:17.628 { 00:17:17.628 "name": "BaseBdev4", 
00:17:17.628 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:17.628 "is_configured": true, 00:17:17.628 "data_offset": 2048, 00:17:17.628 "data_size": 63488 00:17:17.628 } 00:17:17.628 ] 00:17:17.628 }' 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.628 07:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:18.572 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.572 "name": "raid_bdev1", 00:17:18.572 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:18.572 "strip_size_kb": 64, 00:17:18.572 "state": "online", 00:17:18.572 "raid_level": "raid5f", 00:17:18.572 "superblock": true, 00:17:18.572 "num_base_bdevs": 4, 00:17:18.572 "num_base_bdevs_discovered": 4, 00:17:18.572 "num_base_bdevs_operational": 4, 00:17:18.572 "process": { 00:17:18.572 "type": "rebuild", 00:17:18.572 "target": "spare", 00:17:18.572 "progress": { 00:17:18.572 "blocks": 130560, 00:17:18.572 "percent": 68 00:17:18.572 } 00:17:18.572 }, 00:17:18.572 "base_bdevs_list": [ 00:17:18.572 { 00:17:18.572 "name": "spare", 00:17:18.572 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:18.572 "is_configured": true, 00:17:18.572 "data_offset": 2048, 00:17:18.572 "data_size": 63488 00:17:18.572 }, 00:17:18.572 { 00:17:18.573 "name": "BaseBdev2", 00:17:18.573 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:18.573 "is_configured": true, 00:17:18.573 "data_offset": 2048, 00:17:18.573 "data_size": 63488 00:17:18.573 }, 00:17:18.573 { 00:17:18.573 "name": "BaseBdev3", 00:17:18.573 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:18.573 "is_configured": true, 00:17:18.573 "data_offset": 2048, 00:17:18.573 "data_size": 63488 00:17:18.573 }, 00:17:18.573 { 00:17:18.573 "name": "BaseBdev4", 00:17:18.573 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:18.573 "is_configured": true, 00:17:18.573 "data_offset": 2048, 00:17:18.573 "data_size": 63488 00:17:18.573 } 00:17:18.573 ] 00:17:18.573 }' 00:17:18.573 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.573 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.573 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:18.573 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.573 07:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.512 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.772 "name": "raid_bdev1", 00:17:19.772 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:19.772 "strip_size_kb": 64, 00:17:19.772 "state": "online", 00:17:19.772 "raid_level": "raid5f", 00:17:19.772 "superblock": true, 00:17:19.772 "num_base_bdevs": 4, 00:17:19.772 "num_base_bdevs_discovered": 4, 00:17:19.772 "num_base_bdevs_operational": 4, 00:17:19.772 "process": { 00:17:19.772 "type": "rebuild", 00:17:19.772 "target": "spare", 
00:17:19.772 "progress": { 00:17:19.772 "blocks": 153600, 00:17:19.772 "percent": 80 00:17:19.772 } 00:17:19.772 }, 00:17:19.772 "base_bdevs_list": [ 00:17:19.772 { 00:17:19.772 "name": "spare", 00:17:19.772 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:19.772 "is_configured": true, 00:17:19.772 "data_offset": 2048, 00:17:19.772 "data_size": 63488 00:17:19.772 }, 00:17:19.772 { 00:17:19.772 "name": "BaseBdev2", 00:17:19.772 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:19.772 "is_configured": true, 00:17:19.772 "data_offset": 2048, 00:17:19.772 "data_size": 63488 00:17:19.772 }, 00:17:19.772 { 00:17:19.772 "name": "BaseBdev3", 00:17:19.772 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:19.772 "is_configured": true, 00:17:19.772 "data_offset": 2048, 00:17:19.772 "data_size": 63488 00:17:19.772 }, 00:17:19.772 { 00:17:19.772 "name": "BaseBdev4", 00:17:19.772 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:19.772 "is_configured": true, 00:17:19.772 "data_offset": 2048, 00:17:19.772 "data_size": 63488 00:17:19.772 } 00:17:19.772 ] 00:17:19.772 }' 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.772 07:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.711 "name": "raid_bdev1", 00:17:20.711 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:20.711 "strip_size_kb": 64, 00:17:20.711 "state": "online", 00:17:20.711 "raid_level": "raid5f", 00:17:20.711 "superblock": true, 00:17:20.711 "num_base_bdevs": 4, 00:17:20.711 "num_base_bdevs_discovered": 4, 00:17:20.711 "num_base_bdevs_operational": 4, 00:17:20.711 "process": { 00:17:20.711 "type": "rebuild", 00:17:20.711 "target": "spare", 00:17:20.711 "progress": { 00:17:20.711 "blocks": 174720, 00:17:20.711 "percent": 91 00:17:20.711 } 00:17:20.711 }, 00:17:20.711 "base_bdevs_list": [ 00:17:20.711 { 00:17:20.711 "name": "spare", 00:17:20.711 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:20.711 "is_configured": true, 00:17:20.711 "data_offset": 2048, 00:17:20.711 "data_size": 63488 00:17:20.711 }, 00:17:20.711 { 00:17:20.711 "name": "BaseBdev2", 00:17:20.711 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:20.711 "is_configured": true, 00:17:20.711 
"data_offset": 2048, 00:17:20.711 "data_size": 63488 00:17:20.711 }, 00:17:20.711 { 00:17:20.711 "name": "BaseBdev3", 00:17:20.711 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:20.711 "is_configured": true, 00:17:20.711 "data_offset": 2048, 00:17:20.711 "data_size": 63488 00:17:20.711 }, 00:17:20.711 { 00:17:20.711 "name": "BaseBdev4", 00:17:20.711 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:20.711 "is_configured": true, 00:17:20.711 "data_offset": 2048, 00:17:20.711 "data_size": 63488 00:17:20.711 } 00:17:20.711 ] 00:17:20.711 }' 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.711 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.973 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.973 07:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.542 [2024-11-29 07:49:11.447793] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.542 [2024-11-29 07:49:11.447927] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.542 [2024-11-29 07:49:11.448085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.802 07:49:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.802 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.062 "name": "raid_bdev1", 00:17:22.062 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:22.062 "strip_size_kb": 64, 00:17:22.062 "state": "online", 00:17:22.062 "raid_level": "raid5f", 00:17:22.062 "superblock": true, 00:17:22.062 "num_base_bdevs": 4, 00:17:22.062 "num_base_bdevs_discovered": 4, 00:17:22.062 "num_base_bdevs_operational": 4, 00:17:22.062 "base_bdevs_list": [ 00:17:22.062 { 00:17:22.062 "name": "spare", 00:17:22.062 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 2048, 00:17:22.062 "data_size": 63488 00:17:22.062 }, 00:17:22.062 { 00:17:22.062 "name": "BaseBdev2", 00:17:22.062 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 2048, 00:17:22.062 "data_size": 63488 00:17:22.062 }, 00:17:22.062 { 00:17:22.062 "name": "BaseBdev3", 00:17:22.062 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 2048, 00:17:22.062 "data_size": 63488 00:17:22.062 }, 00:17:22.062 { 00:17:22.062 "name": "BaseBdev4", 00:17:22.062 "uuid": 
"e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:22.062 "is_configured": true, 00:17:22.062 "data_offset": 2048, 00:17:22.062 "data_size": 63488 00:17:22.062 } 00:17:22.062 ] 00:17:22.062 }' 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.062 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.063 "name": 
"raid_bdev1", 00:17:22.063 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:22.063 "strip_size_kb": 64, 00:17:22.063 "state": "online", 00:17:22.063 "raid_level": "raid5f", 00:17:22.063 "superblock": true, 00:17:22.063 "num_base_bdevs": 4, 00:17:22.063 "num_base_bdevs_discovered": 4, 00:17:22.063 "num_base_bdevs_operational": 4, 00:17:22.063 "base_bdevs_list": [ 00:17:22.063 { 00:17:22.063 "name": "spare", 00:17:22.063 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:22.063 "is_configured": true, 00:17:22.063 "data_offset": 2048, 00:17:22.063 "data_size": 63488 00:17:22.063 }, 00:17:22.063 { 00:17:22.063 "name": "BaseBdev2", 00:17:22.063 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:22.063 "is_configured": true, 00:17:22.063 "data_offset": 2048, 00:17:22.063 "data_size": 63488 00:17:22.063 }, 00:17:22.063 { 00:17:22.063 "name": "BaseBdev3", 00:17:22.063 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:22.063 "is_configured": true, 00:17:22.063 "data_offset": 2048, 00:17:22.063 "data_size": 63488 00:17:22.063 }, 00:17:22.063 { 00:17:22.063 "name": "BaseBdev4", 00:17:22.063 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:22.063 "is_configured": true, 00:17:22.063 "data_offset": 2048, 00:17:22.063 "data_size": 63488 00:17:22.063 } 00:17:22.063 ] 00:17:22.063 }' 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.063 07:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.323 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.323 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.323 "name": "raid_bdev1", 00:17:22.323 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:22.323 "strip_size_kb": 64, 00:17:22.323 "state": "online", 00:17:22.323 "raid_level": "raid5f", 00:17:22.323 "superblock": true, 00:17:22.323 "num_base_bdevs": 4, 00:17:22.323 "num_base_bdevs_discovered": 4, 00:17:22.323 "num_base_bdevs_operational": 4, 00:17:22.323 "base_bdevs_list": [ 00:17:22.323 { 00:17:22.323 "name": "spare", 
00:17:22.323 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:22.323 "is_configured": true, 00:17:22.323 "data_offset": 2048, 00:17:22.323 "data_size": 63488 00:17:22.323 }, 00:17:22.323 { 00:17:22.323 "name": "BaseBdev2", 00:17:22.323 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:22.323 "is_configured": true, 00:17:22.323 "data_offset": 2048, 00:17:22.323 "data_size": 63488 00:17:22.323 }, 00:17:22.323 { 00:17:22.323 "name": "BaseBdev3", 00:17:22.323 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:22.323 "is_configured": true, 00:17:22.323 "data_offset": 2048, 00:17:22.323 "data_size": 63488 00:17:22.323 }, 00:17:22.323 { 00:17:22.323 "name": "BaseBdev4", 00:17:22.323 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:22.323 "is_configured": true, 00:17:22.323 "data_offset": 2048, 00:17:22.323 "data_size": 63488 00:17:22.323 } 00:17:22.323 ] 00:17:22.323 }' 00:17:22.323 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.323 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 [2024-11-29 07:49:12.459616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.583 [2024-11-29 07:49:12.459686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.583 [2024-11-29 07:49:12.459798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.583 [2024-11-29 07:49:12.459938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.583 [2024-11-29 07:49:12.460013] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.583 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:22.843 /dev/nbd0 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.843 1+0 records in 00:17:22.843 1+0 records out 00:17:22.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529611 s, 7.7 MB/s 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.843 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.103 /dev/nbd1 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.103 1+0 records in 00:17:23.103 1+0 records out 00:17:23.103 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000389278 s, 10.5 MB/s 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.103 07:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.363 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.623 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.884 
07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.884 [2024-11-29 07:49:13.590726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.884 [2024-11-29 07:49:13.590833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.884 [2024-11-29 07:49:13.590873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:23.884 [2024-11-29 07:49:13.590918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.884 [2024-11-29 07:49:13.593184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.884 [2024-11-29 07:49:13.593283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.884 [2024-11-29 07:49:13.593427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.884 [2024-11-29 07:49:13.593502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.884 [2024-11-29 07:49:13.593667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.884 [2024-11-29 07:49:13.593798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.884 [2024-11-29 07:49:13.593926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:23.884 spare 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.884 [2024-11-29 07:49:13.693860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:23.884 [2024-11-29 07:49:13.693926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:23.884 [2024-11-29 07:49:13.694230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:23.884 [2024-11-29 07:49:13.701430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:23.884 [2024-11-29 07:49:13.701483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:23.884 [2024-11-29 07:49:13.701706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.884 "name": "raid_bdev1", 00:17:23.884 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:23.884 "strip_size_kb": 64, 00:17:23.884 "state": "online", 00:17:23.884 "raid_level": "raid5f", 00:17:23.884 "superblock": true, 00:17:23.884 "num_base_bdevs": 4, 00:17:23.884 "num_base_bdevs_discovered": 4, 00:17:23.884 "num_base_bdevs_operational": 4, 00:17:23.884 "base_bdevs_list": [ 00:17:23.884 { 00:17:23.884 "name": "spare", 00:17:23.884 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:23.884 "is_configured": true, 00:17:23.884 "data_offset": 2048, 00:17:23.884 "data_size": 63488 00:17:23.884 }, 00:17:23.884 { 00:17:23.884 "name": "BaseBdev2", 00:17:23.884 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:23.884 "is_configured": true, 00:17:23.884 "data_offset": 2048, 00:17:23.884 "data_size": 63488 00:17:23.884 }, 00:17:23.884 { 00:17:23.884 "name": 
"BaseBdev3", 00:17:23.884 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:23.884 "is_configured": true, 00:17:23.884 "data_offset": 2048, 00:17:23.884 "data_size": 63488 00:17:23.884 }, 00:17:23.884 { 00:17:23.884 "name": "BaseBdev4", 00:17:23.884 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:23.884 "is_configured": true, 00:17:23.884 "data_offset": 2048, 00:17:23.884 "data_size": 63488 00:17:23.884 } 00:17:23.884 ] 00:17:23.884 }' 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.884 07:49:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.453 "name": "raid_bdev1", 00:17:24.453 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:24.453 
"strip_size_kb": 64, 00:17:24.453 "state": "online", 00:17:24.453 "raid_level": "raid5f", 00:17:24.453 "superblock": true, 00:17:24.453 "num_base_bdevs": 4, 00:17:24.453 "num_base_bdevs_discovered": 4, 00:17:24.453 "num_base_bdevs_operational": 4, 00:17:24.453 "base_bdevs_list": [ 00:17:24.453 { 00:17:24.453 "name": "spare", 00:17:24.453 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:24.453 "is_configured": true, 00:17:24.453 "data_offset": 2048, 00:17:24.453 "data_size": 63488 00:17:24.453 }, 00:17:24.453 { 00:17:24.453 "name": "BaseBdev2", 00:17:24.453 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:24.453 "is_configured": true, 00:17:24.453 "data_offset": 2048, 00:17:24.453 "data_size": 63488 00:17:24.453 }, 00:17:24.453 { 00:17:24.453 "name": "BaseBdev3", 00:17:24.453 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:24.453 "is_configured": true, 00:17:24.453 "data_offset": 2048, 00:17:24.453 "data_size": 63488 00:17:24.453 }, 00:17:24.453 { 00:17:24.453 "name": "BaseBdev4", 00:17:24.453 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:24.453 "is_configured": true, 00:17:24.453 "data_offset": 2048, 00:17:24.453 "data_size": 63488 00:17:24.453 } 00:17:24.453 ] 00:17:24.453 }' 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.453 [2024-11-29 07:49:14.341046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.453 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.454 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.713 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.713 "name": "raid_bdev1", 00:17:24.713 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:24.713 "strip_size_kb": 64, 00:17:24.713 "state": "online", 00:17:24.713 "raid_level": "raid5f", 00:17:24.713 "superblock": true, 00:17:24.714 "num_base_bdevs": 4, 00:17:24.714 "num_base_bdevs_discovered": 3, 00:17:24.714 "num_base_bdevs_operational": 3, 00:17:24.714 "base_bdevs_list": [ 00:17:24.714 { 00:17:24.714 "name": null, 00:17:24.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.714 "is_configured": false, 00:17:24.714 "data_offset": 0, 00:17:24.714 "data_size": 63488 00:17:24.714 }, 00:17:24.714 { 00:17:24.714 "name": "BaseBdev2", 00:17:24.714 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:24.714 "is_configured": true, 00:17:24.714 "data_offset": 2048, 00:17:24.714 "data_size": 63488 00:17:24.714 }, 00:17:24.714 { 00:17:24.714 "name": "BaseBdev3", 00:17:24.714 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:24.714 "is_configured": true, 00:17:24.714 "data_offset": 2048, 00:17:24.714 "data_size": 63488 00:17:24.714 }, 00:17:24.714 { 00:17:24.714 "name": "BaseBdev4", 00:17:24.714 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:24.714 "is_configured": true, 00:17:24.714 "data_offset": 2048, 00:17:24.714 "data_size": 63488 00:17:24.714 } 00:17:24.714 ] 00:17:24.714 }' 
00:17:24.714 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.714 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.974 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.974 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.974 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.974 [2024-11-29 07:49:14.808246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.974 [2024-11-29 07:49:14.808429] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.974 [2024-11-29 07:49:14.808450] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:24.974 [2024-11-29 07:49:14.808481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.974 [2024-11-29 07:49:14.823341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:24.974 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.974 07:49:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:24.974 [2024-11-29 07:49:14.832198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.912 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.173 "name": "raid_bdev1", 00:17:26.173 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:26.173 "strip_size_kb": 64, 00:17:26.173 "state": "online", 00:17:26.173 "raid_level": "raid5f", 00:17:26.173 "superblock": true, 00:17:26.173 "num_base_bdevs": 4, 00:17:26.173 "num_base_bdevs_discovered": 4, 00:17:26.173 "num_base_bdevs_operational": 4, 00:17:26.173 "process": { 00:17:26.173 "type": "rebuild", 00:17:26.173 "target": "spare", 00:17:26.173 "progress": { 00:17:26.173 "blocks": 19200, 00:17:26.173 "percent": 10 00:17:26.173 } 00:17:26.173 }, 00:17:26.173 "base_bdevs_list": [ 00:17:26.173 { 00:17:26.173 "name": "spare", 00:17:26.173 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:26.173 "is_configured": true, 00:17:26.173 "data_offset": 2048, 00:17:26.173 "data_size": 63488 00:17:26.173 }, 00:17:26.173 { 00:17:26.173 "name": "BaseBdev2", 00:17:26.173 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:26.173 "is_configured": true, 00:17:26.173 "data_offset": 2048, 00:17:26.173 "data_size": 63488 00:17:26.173 }, 00:17:26.173 { 00:17:26.173 "name": "BaseBdev3", 00:17:26.173 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:26.173 
"is_configured": true, 00:17:26.173 "data_offset": 2048, 00:17:26.173 "data_size": 63488 00:17:26.173 }, 00:17:26.173 { 00:17:26.173 "name": "BaseBdev4", 00:17:26.173 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:26.173 "is_configured": true, 00:17:26.173 "data_offset": 2048, 00:17:26.173 "data_size": 63488 00:17:26.173 } 00:17:26.173 ] 00:17:26.173 }' 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.173 07:49:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.173 [2024-11-29 07:49:15.963115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.173 [2024-11-29 07:49:16.038154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.173 [2024-11-29 07:49:16.038226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.173 [2024-11-29 07:49:16.038243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.173 [2024-11-29 07:49:16.038252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.173 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.433 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.433 "name": "raid_bdev1", 00:17:26.433 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:26.433 "strip_size_kb": 64, 00:17:26.433 "state": "online", 00:17:26.433 "raid_level": "raid5f", 00:17:26.433 "superblock": true, 00:17:26.433 "num_base_bdevs": 4, 00:17:26.433 "num_base_bdevs_discovered": 3, 
00:17:26.433 "num_base_bdevs_operational": 3, 00:17:26.433 "base_bdevs_list": [ 00:17:26.433 { 00:17:26.433 "name": null, 00:17:26.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.433 "is_configured": false, 00:17:26.433 "data_offset": 0, 00:17:26.433 "data_size": 63488 00:17:26.433 }, 00:17:26.433 { 00:17:26.433 "name": "BaseBdev2", 00:17:26.433 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:26.433 "is_configured": true, 00:17:26.433 "data_offset": 2048, 00:17:26.433 "data_size": 63488 00:17:26.433 }, 00:17:26.433 { 00:17:26.433 "name": "BaseBdev3", 00:17:26.433 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:26.433 "is_configured": true, 00:17:26.433 "data_offset": 2048, 00:17:26.433 "data_size": 63488 00:17:26.433 }, 00:17:26.433 { 00:17:26.433 "name": "BaseBdev4", 00:17:26.433 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:26.433 "is_configured": true, 00:17:26.433 "data_offset": 2048, 00:17:26.433 "data_size": 63488 00:17:26.433 } 00:17:26.433 ] 00:17:26.433 }' 00:17:26.433 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.433 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.698 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.698 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.698 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.698 [2024-11-29 07:49:16.503481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.698 [2024-11-29 07:49:16.503556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.698 [2024-11-29 07:49:16.503584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:26.698 [2024-11-29 07:49:16.503597] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.698 [2024-11-29 07:49:16.504134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.698 [2024-11-29 07:49:16.504160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.698 [2024-11-29 07:49:16.504259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:26.698 [2024-11-29 07:49:16.504274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:26.698 [2024-11-29 07:49:16.504284] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:26.698 [2024-11-29 07:49:16.504310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.698 [2024-11-29 07:49:16.518881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:26.698 spare 00:17:26.698 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.698 07:49:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:26.698 [2024-11-29 07:49:16.527999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.642 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.642 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.643 "name": "raid_bdev1", 00:17:27.643 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:27.643 "strip_size_kb": 64, 00:17:27.643 "state": "online", 00:17:27.643 "raid_level": "raid5f", 00:17:27.643 "superblock": true, 00:17:27.643 "num_base_bdevs": 4, 00:17:27.643 "num_base_bdevs_discovered": 4, 00:17:27.643 "num_base_bdevs_operational": 4, 00:17:27.643 "process": { 00:17:27.643 "type": "rebuild", 00:17:27.643 "target": "spare", 00:17:27.643 "progress": { 00:17:27.643 "blocks": 19200, 00:17:27.643 "percent": 10 00:17:27.643 } 00:17:27.643 }, 00:17:27.643 "base_bdevs_list": [ 00:17:27.643 { 00:17:27.643 "name": "spare", 00:17:27.643 "uuid": "63d1dbd3-575c-5d51-ad76-dc365208473b", 00:17:27.643 "is_configured": true, 00:17:27.643 "data_offset": 2048, 00:17:27.643 "data_size": 63488 00:17:27.643 }, 00:17:27.643 { 00:17:27.643 "name": "BaseBdev2", 00:17:27.643 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:27.643 "is_configured": true, 00:17:27.643 "data_offset": 2048, 00:17:27.643 "data_size": 63488 00:17:27.643 }, 00:17:27.643 { 00:17:27.643 "name": "BaseBdev3", 00:17:27.643 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:27.643 "is_configured": true, 00:17:27.643 "data_offset": 2048, 00:17:27.643 "data_size": 63488 00:17:27.643 }, 00:17:27.643 { 00:17:27.643 "name": "BaseBdev4", 00:17:27.643 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 
00:17:27.643 "is_configured": true, 00:17:27.643 "data_offset": 2048, 00:17:27.643 "data_size": 63488 00:17:27.643 } 00:17:27.643 ] 00:17:27.643 }' 00:17:27.643 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.903 [2024-11-29 07:49:17.678870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.903 [2024-11-29 07:49:17.734092] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.903 [2024-11-29 07:49:17.734166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.903 [2024-11-29 07:49:17.734185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.903 [2024-11-29 07:49:17.734192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.903 "name": "raid_bdev1", 00:17:27.903 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:27.903 "strip_size_kb": 64, 00:17:27.903 "state": "online", 00:17:27.903 "raid_level": "raid5f", 00:17:27.903 "superblock": true, 00:17:27.903 "num_base_bdevs": 4, 00:17:27.903 "num_base_bdevs_discovered": 3, 00:17:27.903 "num_base_bdevs_operational": 3, 00:17:27.903 "base_bdevs_list": [ 00:17:27.903 { 00:17:27.903 "name": null, 00:17:27.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.903 "is_configured": 
false, 00:17:27.903 "data_offset": 0, 00:17:27.903 "data_size": 63488 00:17:27.903 }, 00:17:27.903 { 00:17:27.903 "name": "BaseBdev2", 00:17:27.903 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:27.903 "is_configured": true, 00:17:27.903 "data_offset": 2048, 00:17:27.903 "data_size": 63488 00:17:27.903 }, 00:17:27.903 { 00:17:27.903 "name": "BaseBdev3", 00:17:27.903 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:27.903 "is_configured": true, 00:17:27.903 "data_offset": 2048, 00:17:27.903 "data_size": 63488 00:17:27.903 }, 00:17:27.903 { 00:17:27.903 "name": "BaseBdev4", 00:17:27.903 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:27.903 "is_configured": true, 00:17:27.903 "data_offset": 2048, 00:17:27.903 "data_size": 63488 00:17:27.903 } 00:17:27.903 ] 00:17:27.903 }' 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.903 07:49:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.521 "name": "raid_bdev1", 00:17:28.521 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:28.521 "strip_size_kb": 64, 00:17:28.521 "state": "online", 00:17:28.521 "raid_level": "raid5f", 00:17:28.521 "superblock": true, 00:17:28.521 "num_base_bdevs": 4, 00:17:28.521 "num_base_bdevs_discovered": 3, 00:17:28.521 "num_base_bdevs_operational": 3, 00:17:28.521 "base_bdevs_list": [ 00:17:28.521 { 00:17:28.521 "name": null, 00:17:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.521 "is_configured": false, 00:17:28.521 "data_offset": 0, 00:17:28.521 "data_size": 63488 00:17:28.521 }, 00:17:28.521 { 00:17:28.521 "name": "BaseBdev2", 00:17:28.521 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:28.521 "is_configured": true, 00:17:28.521 "data_offset": 2048, 00:17:28.521 "data_size": 63488 00:17:28.521 }, 00:17:28.521 { 00:17:28.521 "name": "BaseBdev3", 00:17:28.521 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:28.521 "is_configured": true, 00:17:28.521 "data_offset": 2048, 00:17:28.521 "data_size": 63488 00:17:28.521 }, 00:17:28.521 { 00:17:28.521 "name": "BaseBdev4", 00:17:28.521 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:28.521 "is_configured": true, 00:17:28.521 "data_offset": 2048, 00:17:28.521 "data_size": 63488 00:17:28.521 } 00:17:28.521 ] 00:17:28.521 }' 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.521 [2024-11-29 07:49:18.358058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.521 [2024-11-29 07:49:18.358129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.521 [2024-11-29 07:49:18.358153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:28.521 [2024-11-29 07:49:18.358163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.521 [2024-11-29 07:49:18.358634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.521 [2024-11-29 07:49:18.358653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.521 [2024-11-29 07:49:18.358732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:28.521 [2024-11-29 07:49:18.358746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:28.521 [2024-11-29 07:49:18.358757] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:28.521 [2024-11-29 07:49:18.358768] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:28.521 BaseBdev1 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.521 07:49:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.474 07:49:19 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.734 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.734 "name": "raid_bdev1", 00:17:29.734 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:29.734 "strip_size_kb": 64, 00:17:29.734 "state": "online", 00:17:29.734 "raid_level": "raid5f", 00:17:29.734 "superblock": true, 00:17:29.734 "num_base_bdevs": 4, 00:17:29.734 "num_base_bdevs_discovered": 3, 00:17:29.734 "num_base_bdevs_operational": 3, 00:17:29.734 "base_bdevs_list": [ 00:17:29.734 { 00:17:29.734 "name": null, 00:17:29.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.734 "is_configured": false, 00:17:29.734 "data_offset": 0, 00:17:29.734 "data_size": 63488 00:17:29.734 }, 00:17:29.734 { 00:17:29.734 "name": "BaseBdev2", 00:17:29.734 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:29.734 "is_configured": true, 00:17:29.734 "data_offset": 2048, 00:17:29.734 "data_size": 63488 00:17:29.734 }, 00:17:29.734 { 00:17:29.734 "name": "BaseBdev3", 00:17:29.734 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:29.734 "is_configured": true, 00:17:29.734 "data_offset": 2048, 00:17:29.734 "data_size": 63488 00:17:29.734 }, 00:17:29.734 { 00:17:29.734 "name": "BaseBdev4", 00:17:29.734 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:29.734 "is_configured": true, 00:17:29.734 "data_offset": 2048, 00:17:29.734 "data_size": 63488 00:17:29.734 } 00:17:29.734 ] 00:17:29.734 }' 00:17:29.734 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.734 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.993 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.993 "name": "raid_bdev1", 00:17:29.993 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:29.993 "strip_size_kb": 64, 00:17:29.993 "state": "online", 00:17:29.993 "raid_level": "raid5f", 00:17:29.993 "superblock": true, 00:17:29.994 "num_base_bdevs": 4, 00:17:29.994 "num_base_bdevs_discovered": 3, 00:17:29.994 "num_base_bdevs_operational": 3, 00:17:29.994 "base_bdevs_list": [ 00:17:29.994 { 00:17:29.994 "name": null, 00:17:29.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.994 "is_configured": false, 00:17:29.994 "data_offset": 0, 00:17:29.994 "data_size": 63488 00:17:29.994 }, 00:17:29.994 { 00:17:29.994 "name": "BaseBdev2", 00:17:29.994 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:29.994 "is_configured": true, 00:17:29.994 "data_offset": 2048, 00:17:29.994 "data_size": 63488 00:17:29.994 }, 00:17:29.994 { 00:17:29.994 "name": "BaseBdev3", 00:17:29.994 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:29.994 "is_configured": true, 00:17:29.994 "data_offset": 2048, 00:17:29.994 "data_size": 63488 00:17:29.994 }, 
00:17:29.994 { 00:17:29.994 "name": "BaseBdev4", 00:17:29.994 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:29.994 "is_configured": true, 00:17:29.994 "data_offset": 2048, 00:17:29.994 "data_size": 63488 00:17:29.994 } 00:17:29.994 ] 00:17:29.994 }' 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.994 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.253 [2024-11-29 07:49:19.939475] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.253 [2024-11-29 07:49:19.939653] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.253 [2024-11-29 07:49:19.939668] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.253 request: 00:17:30.253 { 00:17:30.253 "base_bdev": "BaseBdev1", 00:17:30.253 "raid_bdev": "raid_bdev1", 00:17:30.253 "method": "bdev_raid_add_base_bdev", 00:17:30.253 "req_id": 1 00:17:30.253 } 00:17:30.253 Got JSON-RPC error response 00:17:30.253 response: 00:17:30.253 { 00:17:30.253 "code": -22, 00:17:30.253 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:30.253 } 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.253 07:49:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.192 "name": "raid_bdev1", 00:17:31.192 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:31.192 "strip_size_kb": 64, 00:17:31.192 "state": "online", 00:17:31.192 "raid_level": "raid5f", 00:17:31.192 "superblock": true, 00:17:31.192 "num_base_bdevs": 4, 00:17:31.192 "num_base_bdevs_discovered": 3, 00:17:31.192 "num_base_bdevs_operational": 3, 00:17:31.192 "base_bdevs_list": [ 00:17:31.192 { 00:17:31.192 "name": null, 00:17:31.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.192 "is_configured": false, 00:17:31.192 "data_offset": 0, 00:17:31.192 "data_size": 63488 00:17:31.192 }, 00:17:31.192 { 00:17:31.192 "name": "BaseBdev2", 00:17:31.192 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:31.192 "is_configured": true, 00:17:31.192 
"data_offset": 2048, 00:17:31.192 "data_size": 63488 00:17:31.192 }, 00:17:31.192 { 00:17:31.192 "name": "BaseBdev3", 00:17:31.192 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:31.192 "is_configured": true, 00:17:31.192 "data_offset": 2048, 00:17:31.192 "data_size": 63488 00:17:31.192 }, 00:17:31.192 { 00:17:31.192 "name": "BaseBdev4", 00:17:31.192 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:31.192 "is_configured": true, 00:17:31.192 "data_offset": 2048, 00:17:31.192 "data_size": 63488 00:17:31.192 } 00:17:31.192 ] 00:17:31.192 }' 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.192 07:49:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.452 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.712 
"name": "raid_bdev1", 00:17:31.712 "uuid": "21a36fdc-247e-485e-a60a-bf5744e0a4cf", 00:17:31.712 "strip_size_kb": 64, 00:17:31.712 "state": "online", 00:17:31.712 "raid_level": "raid5f", 00:17:31.712 "superblock": true, 00:17:31.712 "num_base_bdevs": 4, 00:17:31.712 "num_base_bdevs_discovered": 3, 00:17:31.712 "num_base_bdevs_operational": 3, 00:17:31.712 "base_bdevs_list": [ 00:17:31.712 { 00:17:31.712 "name": null, 00:17:31.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.712 "is_configured": false, 00:17:31.712 "data_offset": 0, 00:17:31.712 "data_size": 63488 00:17:31.712 }, 00:17:31.712 { 00:17:31.712 "name": "BaseBdev2", 00:17:31.712 "uuid": "d7664040-fb3e-563d-9b2c-59ae6ee8b550", 00:17:31.712 "is_configured": true, 00:17:31.712 "data_offset": 2048, 00:17:31.712 "data_size": 63488 00:17:31.712 }, 00:17:31.712 { 00:17:31.712 "name": "BaseBdev3", 00:17:31.712 "uuid": "843254ec-98a8-5951-951d-f9cf790754ab", 00:17:31.712 "is_configured": true, 00:17:31.712 "data_offset": 2048, 00:17:31.712 "data_size": 63488 00:17:31.712 }, 00:17:31.712 { 00:17:31.712 "name": "BaseBdev4", 00:17:31.712 "uuid": "e475f6f4-16a8-5687-8a1f-8c97f63fcaa3", 00:17:31.712 "is_configured": true, 00:17:31.712 "data_offset": 2048, 00:17:31.712 "data_size": 63488 00:17:31.712 } 00:17:31.712 ] 00:17:31.712 }' 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84789 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84789 ']' 00:17:31.712 07:49:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84789 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84789 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.712 killing process with pid 84789 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84789' 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84789 00:17:31.712 Received shutdown signal, test time was about 60.000000 seconds 00:17:31.712 00:17:31.712 Latency(us) 00:17:31.712 [2024-11-29T07:49:21.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.712 [2024-11-29T07:49:21.657Z] =================================================================================================================== 00:17:31.712 [2024-11-29T07:49:21.657Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.712 [2024-11-29 07:49:21.563222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.712 [2024-11-29 07:49:21.563339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.712 07:49:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84789 00:17:31.712 [2024-11-29 07:49:21.563412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.712 [2024-11-29 07:49:21.563425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:32.281 [2024-11-29 07:49:22.027546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.219 07:49:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:33.219 00:17:33.219 real 0m26.668s 00:17:33.219 user 0m33.533s 00:17:33.219 sys 0m2.841s 00:17:33.219 07:49:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.219 07:49:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.219 ************************************ 00:17:33.219 END TEST raid5f_rebuild_test_sb 00:17:33.219 ************************************ 00:17:33.219 07:49:23 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:33.219 07:49:23 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:33.219 07:49:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:33.219 07:49:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.219 07:49:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.219 ************************************ 00:17:33.219 START TEST raid_state_function_test_sb_4k 00:17:33.219 ************************************ 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 
00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85599 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:33.219 Process raid pid: 85599 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85599' 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85599 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85599 ']' 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.219 07:49:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.479 [2024-11-29 07:49:23.243920] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:33.479 [2024-11-29 07:49:23.244032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.479 [2024-11-29 07:49:23.420150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.738 [2024-11-29 07:49:23.531011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.997 [2024-11-29 07:49:23.725525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.997 [2024-11-29 07:49:23.725561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.257 [2024-11-29 07:49:24.058224] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.257 [2024-11-29 07:49:24.058273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.257 [2024-11-29 07:49:24.058283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.257 [2024-11-29 07:49:24.058293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.257 "name": "Existed_Raid", 00:17:34.257 "uuid": 
"efcb5870-f0db-4780-8a9d-b9c9cbc0b2de", 00:17:34.257 "strip_size_kb": 0, 00:17:34.257 "state": "configuring", 00:17:34.257 "raid_level": "raid1", 00:17:34.257 "superblock": true, 00:17:34.257 "num_base_bdevs": 2, 00:17:34.257 "num_base_bdevs_discovered": 0, 00:17:34.257 "num_base_bdevs_operational": 2, 00:17:34.257 "base_bdevs_list": [ 00:17:34.257 { 00:17:34.257 "name": "BaseBdev1", 00:17:34.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.257 "is_configured": false, 00:17:34.257 "data_offset": 0, 00:17:34.257 "data_size": 0 00:17:34.257 }, 00:17:34.257 { 00:17:34.257 "name": "BaseBdev2", 00:17:34.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.257 "is_configured": false, 00:17:34.257 "data_offset": 0, 00:17:34.257 "data_size": 0 00:17:34.257 } 00:17:34.257 ] 00:17:34.257 }' 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.257 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 [2024-11-29 07:49:24.469434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.826 [2024-11-29 07:49:24.469469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:34.826 07:49:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 [2024-11-29 07:49:24.481420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.826 [2024-11-29 07:49:24.481457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.826 [2024-11-29 07:49:24.481465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.826 [2024-11-29 07:49:24.481476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 [2024-11-29 07:49:24.530224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.826 BaseBdev1 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.826 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 [ 00:17:34.826 { 00:17:34.826 "name": "BaseBdev1", 00:17:34.826 "aliases": [ 00:17:34.826 "e6560478-3e69-4acb-b756-939aed6fcc57" 00:17:34.826 ], 00:17:34.826 "product_name": "Malloc disk", 00:17:34.826 "block_size": 4096, 00:17:34.826 "num_blocks": 8192, 00:17:34.826 "uuid": "e6560478-3e69-4acb-b756-939aed6fcc57", 00:17:34.826 "assigned_rate_limits": { 00:17:34.826 "rw_ios_per_sec": 0, 00:17:34.826 "rw_mbytes_per_sec": 0, 00:17:34.826 "r_mbytes_per_sec": 0, 00:17:34.826 "w_mbytes_per_sec": 0 00:17:34.826 }, 00:17:34.826 "claimed": true, 00:17:34.827 "claim_type": "exclusive_write", 00:17:34.827 "zoned": false, 00:17:34.827 "supported_io_types": { 00:17:34.827 "read": true, 00:17:34.827 "write": true, 00:17:34.827 "unmap": true, 00:17:34.827 "flush": true, 00:17:34.827 "reset": true, 00:17:34.827 "nvme_admin": false, 00:17:34.827 "nvme_io": false, 00:17:34.827 "nvme_io_md": false, 00:17:34.827 "write_zeroes": true, 00:17:34.827 "zcopy": true, 00:17:34.827 
"get_zone_info": false, 00:17:34.827 "zone_management": false, 00:17:34.827 "zone_append": false, 00:17:34.827 "compare": false, 00:17:34.827 "compare_and_write": false, 00:17:34.827 "abort": true, 00:17:34.827 "seek_hole": false, 00:17:34.827 "seek_data": false, 00:17:34.827 "copy": true, 00:17:34.827 "nvme_iov_md": false 00:17:34.827 }, 00:17:34.827 "memory_domains": [ 00:17:34.827 { 00:17:34.827 "dma_device_id": "system", 00:17:34.827 "dma_device_type": 1 00:17:34.827 }, 00:17:34.827 { 00:17:34.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.827 "dma_device_type": 2 00:17:34.827 } 00:17:34.827 ], 00:17:34.827 "driver_specific": {} 00:17:34.827 } 00:17:34.827 ] 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.827 "name": "Existed_Raid", 00:17:34.827 "uuid": "cec6c03b-8a21-4cfa-bab6-30be6d71886d", 00:17:34.827 "strip_size_kb": 0, 00:17:34.827 "state": "configuring", 00:17:34.827 "raid_level": "raid1", 00:17:34.827 "superblock": true, 00:17:34.827 "num_base_bdevs": 2, 00:17:34.827 "num_base_bdevs_discovered": 1, 00:17:34.827 "num_base_bdevs_operational": 2, 00:17:34.827 "base_bdevs_list": [ 00:17:34.827 { 00:17:34.827 "name": "BaseBdev1", 00:17:34.827 "uuid": "e6560478-3e69-4acb-b756-939aed6fcc57", 00:17:34.827 "is_configured": true, 00:17:34.827 "data_offset": 256, 00:17:34.827 "data_size": 7936 00:17:34.827 }, 00:17:34.827 { 00:17:34.827 "name": "BaseBdev2", 00:17:34.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.827 "is_configured": false, 00:17:34.827 "data_offset": 0, 00:17:34.827 "data_size": 0 00:17:34.827 } 00:17:34.827 ] 00:17:34.827 }' 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.827 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.086 [2024-11-29 07:49:24.989478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.086 [2024-11-29 07:49:24.989520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.086 07:49:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.087 [2024-11-29 07:49:25.001498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.087 [2024-11-29 07:49:25.003248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.087 [2024-11-29 07:49:25.003287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:35.087 07:49:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.087 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.345 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.345 "name": "Existed_Raid", 00:17:35.345 "uuid": "2ad2a37d-8198-4084-b182-e68dc7343dae", 00:17:35.345 "strip_size_kb": 0, 00:17:35.345 "state": "configuring", 00:17:35.345 "raid_level": "raid1", 00:17:35.345 "superblock": true, 
00:17:35.345 "num_base_bdevs": 2, 00:17:35.345 "num_base_bdevs_discovered": 1, 00:17:35.345 "num_base_bdevs_operational": 2, 00:17:35.345 "base_bdevs_list": [ 00:17:35.345 { 00:17:35.345 "name": "BaseBdev1", 00:17:35.345 "uuid": "e6560478-3e69-4acb-b756-939aed6fcc57", 00:17:35.345 "is_configured": true, 00:17:35.345 "data_offset": 256, 00:17:35.345 "data_size": 7936 00:17:35.345 }, 00:17:35.345 { 00:17:35.345 "name": "BaseBdev2", 00:17:35.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.345 "is_configured": false, 00:17:35.345 "data_offset": 0, 00:17:35.345 "data_size": 0 00:17:35.345 } 00:17:35.345 ] 00:17:35.345 }' 00:17:35.345 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.345 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 [2024-11-29 07:49:25.473701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.604 [2024-11-29 07:49:25.473952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:35.604 [2024-11-29 07:49:25.473969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.604 [2024-11-29 07:49:25.474266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:35.604 [2024-11-29 07:49:25.474433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:35.604 [2024-11-29 07:49:25.474453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:35.604 BaseBdev2 00:17:35.604 [2024-11-29 07:49:25.474614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.604 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 [ 00:17:35.604 { 00:17:35.604 "name": "BaseBdev2", 00:17:35.604 "aliases": [ 00:17:35.604 "d86181e9-07c9-4663-8213-b74cb93b7c2e" 00:17:35.604 ], 00:17:35.604 "product_name": "Malloc 
disk", 00:17:35.604 "block_size": 4096, 00:17:35.604 "num_blocks": 8192, 00:17:35.604 "uuid": "d86181e9-07c9-4663-8213-b74cb93b7c2e", 00:17:35.604 "assigned_rate_limits": { 00:17:35.604 "rw_ios_per_sec": 0, 00:17:35.604 "rw_mbytes_per_sec": 0, 00:17:35.604 "r_mbytes_per_sec": 0, 00:17:35.604 "w_mbytes_per_sec": 0 00:17:35.604 }, 00:17:35.604 "claimed": true, 00:17:35.604 "claim_type": "exclusive_write", 00:17:35.604 "zoned": false, 00:17:35.604 "supported_io_types": { 00:17:35.604 "read": true, 00:17:35.604 "write": true, 00:17:35.604 "unmap": true, 00:17:35.604 "flush": true, 00:17:35.604 "reset": true, 00:17:35.604 "nvme_admin": false, 00:17:35.604 "nvme_io": false, 00:17:35.604 "nvme_io_md": false, 00:17:35.604 "write_zeroes": true, 00:17:35.604 "zcopy": true, 00:17:35.604 "get_zone_info": false, 00:17:35.604 "zone_management": false, 00:17:35.604 "zone_append": false, 00:17:35.604 "compare": false, 00:17:35.604 "compare_and_write": false, 00:17:35.604 "abort": true, 00:17:35.604 "seek_hole": false, 00:17:35.604 "seek_data": false, 00:17:35.604 "copy": true, 00:17:35.604 "nvme_iov_md": false 00:17:35.604 }, 00:17:35.604 "memory_domains": [ 00:17:35.604 { 00:17:35.604 "dma_device_id": "system", 00:17:35.604 "dma_device_type": 1 00:17:35.604 }, 00:17:35.604 { 00:17:35.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.605 "dma_device_type": 2 00:17:35.605 } 00:17:35.605 ], 00:17:35.605 "driver_specific": {} 00:17:35.605 } 00:17:35.605 ] 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.605 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.865 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.865 "name": "Existed_Raid", 00:17:35.865 "uuid": "2ad2a37d-8198-4084-b182-e68dc7343dae", 00:17:35.865 "strip_size_kb": 0, 00:17:35.865 "state": "online", 
00:17:35.865 "raid_level": "raid1", 00:17:35.865 "superblock": true, 00:17:35.865 "num_base_bdevs": 2, 00:17:35.865 "num_base_bdevs_discovered": 2, 00:17:35.865 "num_base_bdevs_operational": 2, 00:17:35.865 "base_bdevs_list": [ 00:17:35.865 { 00:17:35.865 "name": "BaseBdev1", 00:17:35.865 "uuid": "e6560478-3e69-4acb-b756-939aed6fcc57", 00:17:35.865 "is_configured": true, 00:17:35.865 "data_offset": 256, 00:17:35.865 "data_size": 7936 00:17:35.865 }, 00:17:35.865 { 00:17:35.865 "name": "BaseBdev2", 00:17:35.865 "uuid": "d86181e9-07c9-4663-8213-b74cb93b7c2e", 00:17:35.865 "is_configured": true, 00:17:35.865 "data_offset": 256, 00:17:35.865 "data_size": 7936 00:17:35.865 } 00:17:35.865 ] 00:17:35.865 }' 00:17:35.865 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.865 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:17:36.124 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.125 [2024-11-29 07:49:25.977122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.125 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.125 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.125 "name": "Existed_Raid", 00:17:36.125 "aliases": [ 00:17:36.125 "2ad2a37d-8198-4084-b182-e68dc7343dae" 00:17:36.125 ], 00:17:36.125 "product_name": "Raid Volume", 00:17:36.125 "block_size": 4096, 00:17:36.125 "num_blocks": 7936, 00:17:36.125 "uuid": "2ad2a37d-8198-4084-b182-e68dc7343dae", 00:17:36.125 "assigned_rate_limits": { 00:17:36.125 "rw_ios_per_sec": 0, 00:17:36.125 "rw_mbytes_per_sec": 0, 00:17:36.125 "r_mbytes_per_sec": 0, 00:17:36.125 "w_mbytes_per_sec": 0 00:17:36.125 }, 00:17:36.125 "claimed": false, 00:17:36.125 "zoned": false, 00:17:36.125 "supported_io_types": { 00:17:36.125 "read": true, 00:17:36.125 "write": true, 00:17:36.125 "unmap": false, 00:17:36.125 "flush": false, 00:17:36.125 "reset": true, 00:17:36.125 "nvme_admin": false, 00:17:36.125 "nvme_io": false, 00:17:36.125 "nvme_io_md": false, 00:17:36.125 "write_zeroes": true, 00:17:36.125 "zcopy": false, 00:17:36.125 "get_zone_info": false, 00:17:36.125 "zone_management": false, 00:17:36.125 "zone_append": false, 00:17:36.125 "compare": false, 00:17:36.125 "compare_and_write": false, 00:17:36.125 "abort": false, 00:17:36.125 "seek_hole": false, 00:17:36.125 "seek_data": false, 00:17:36.125 "copy": false, 00:17:36.125 "nvme_iov_md": false 00:17:36.125 }, 00:17:36.125 "memory_domains": [ 00:17:36.125 { 00:17:36.125 "dma_device_id": "system", 00:17:36.125 "dma_device_type": 1 00:17:36.125 }, 00:17:36.125 { 00:17:36.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.125 "dma_device_type": 2 00:17:36.125 }, 00:17:36.125 { 00:17:36.125 
"dma_device_id": "system", 00:17:36.125 "dma_device_type": 1 00:17:36.125 }, 00:17:36.125 { 00:17:36.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.125 "dma_device_type": 2 00:17:36.125 } 00:17:36.125 ], 00:17:36.125 "driver_specific": { 00:17:36.125 "raid": { 00:17:36.125 "uuid": "2ad2a37d-8198-4084-b182-e68dc7343dae", 00:17:36.125 "strip_size_kb": 0, 00:17:36.125 "state": "online", 00:17:36.125 "raid_level": "raid1", 00:17:36.125 "superblock": true, 00:17:36.125 "num_base_bdevs": 2, 00:17:36.125 "num_base_bdevs_discovered": 2, 00:17:36.125 "num_base_bdevs_operational": 2, 00:17:36.125 "base_bdevs_list": [ 00:17:36.125 { 00:17:36.125 "name": "BaseBdev1", 00:17:36.125 "uuid": "e6560478-3e69-4acb-b756-939aed6fcc57", 00:17:36.125 "is_configured": true, 00:17:36.125 "data_offset": 256, 00:17:36.125 "data_size": 7936 00:17:36.125 }, 00:17:36.125 { 00:17:36.125 "name": "BaseBdev2", 00:17:36.125 "uuid": "d86181e9-07c9-4663-8213-b74cb93b7c2e", 00:17:36.125 "is_configured": true, 00:17:36.125 "data_offset": 256, 00:17:36.125 "data_size": 7936 00:17:36.125 } 00:17:36.125 ] 00:17:36.125 } 00:17:36.125 } 00:17:36.125 }' 00:17:36.125 07:49:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.125 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:36.125 BaseBdev2' 00:17:36.125 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.385 
07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 [2024-11-29 07:49:26.200490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.385 07:49:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.385 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.645 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.645 "name": "Existed_Raid", 00:17:36.645 "uuid": "2ad2a37d-8198-4084-b182-e68dc7343dae", 00:17:36.645 "strip_size_kb": 0, 00:17:36.645 "state": "online", 00:17:36.645 "raid_level": "raid1", 00:17:36.645 "superblock": true, 00:17:36.645 "num_base_bdevs": 2, 00:17:36.645 "num_base_bdevs_discovered": 1, 00:17:36.645 "num_base_bdevs_operational": 1, 00:17:36.645 "base_bdevs_list": [ 00:17:36.645 { 00:17:36.645 "name": null, 00:17:36.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.645 "is_configured": false, 00:17:36.645 "data_offset": 0, 00:17:36.645 "data_size": 7936 00:17:36.645 }, 00:17:36.645 { 00:17:36.645 "name": "BaseBdev2", 00:17:36.645 "uuid": "d86181e9-07c9-4663-8213-b74cb93b7c2e", 00:17:36.645 "is_configured": true, 00:17:36.645 "data_offset": 256, 00:17:36.645 "data_size": 7936 00:17:36.645 } 00:17:36.645 ] 00:17:36.645 }' 00:17:36.645 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.645 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:36.905 07:49:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.905 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.905 [2024-11-29 07:49:26.786854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:36.905 [2024-11-29 07:49:26.786958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.164 [2024-11-29 07:49:26.876922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.164 [2024-11-29 07:49:26.876973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.164 [2024-11-29 07:49:26.876985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:37.164 07:49:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.164 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85599 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85599 ']' 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85599 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85599 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.165 killing process with pid 85599 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85599' 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85599 00:17:37.165 [2024-11-29 07:49:26.969113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.165 07:49:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85599 00:17:37.165 [2024-11-29 07:49:26.984844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.104 07:49:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:38.104 00:17:38.104 real 0m4.898s 00:17:38.104 user 0m7.092s 00:17:38.104 sys 0m0.823s 00:17:38.104 07:49:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.104 07:49:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.104 ************************************ 00:17:38.104 END TEST raid_state_function_test_sb_4k 00:17:38.104 ************************************ 00:17:38.364 07:49:28 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:38.364 07:49:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:38.364 07:49:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.364 07:49:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 ************************************ 00:17:38.364 START TEST raid_superblock_test_4k 00:17:38.364 ************************************ 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85846 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 85846 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85846 ']' 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.364 07:49:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 [2024-11-29 07:49:28.217433] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:38.364 [2024-11-29 07:49:28.217545] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85846 ] 00:17:38.624 [2024-11-29 07:49:28.392742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.624 [2024-11-29 07:49:28.497297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.883 [2024-11-29 07:49:28.697756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.883 [2024-11-29 07:49:28.697785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:39.143 07:49:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.143 malloc1 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.143 [2024-11-29 07:49:29.071753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:39.143 [2024-11-29 07:49:29.071808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.143 
[2024-11-29 07:49:29.071828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.143 [2024-11-29 07:49:29.071836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.143 [2024-11-29 07:49:29.073878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.143 [2024-11-29 07:49:29.073913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:39.143 pt1 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.143 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.403 malloc2 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.403 [2024-11-29 07:49:29.124953] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.403 [2024-11-29 07:49:29.125001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.403 [2024-11-29 07:49:29.125025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.403 [2024-11-29 07:49:29.125034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.403 [2024-11-29 07:49:29.127038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.403 [2024-11-29 07:49:29.127070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.403 pt2 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.403 [2024-11-29 07:49:29.136978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:39.403 [2024-11-29 07:49:29.138719] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.403 [2024-11-29 07:49:29.138893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.403 [2024-11-29 07:49:29.138910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.403 [2024-11-29 07:49:29.139170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:39.403 [2024-11-29 07:49:29.139330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.403 [2024-11-29 07:49:29.139345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.403 [2024-11-29 07:49:29.139482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.403 "name": "raid_bdev1", 00:17:39.403 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:39.403 "strip_size_kb": 0, 00:17:39.403 "state": "online", 00:17:39.403 "raid_level": "raid1", 00:17:39.403 "superblock": true, 00:17:39.403 "num_base_bdevs": 2, 00:17:39.403 "num_base_bdevs_discovered": 2, 00:17:39.403 "num_base_bdevs_operational": 2, 00:17:39.403 "base_bdevs_list": [ 00:17:39.403 { 00:17:39.403 "name": "pt1", 00:17:39.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.403 "is_configured": true, 00:17:39.403 "data_offset": 256, 00:17:39.403 "data_size": 7936 00:17:39.403 }, 00:17:39.403 { 00:17:39.403 "name": "pt2", 00:17:39.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.403 "is_configured": true, 00:17:39.403 "data_offset": 256, 00:17:39.403 "data_size": 7936 00:17:39.403 } 00:17:39.403 ] 00:17:39.403 }' 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.403 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:39.663 07:49:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.663 [2024-11-29 07:49:29.560448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.663 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.663 "name": "raid_bdev1", 00:17:39.663 "aliases": [ 00:17:39.663 "602d2246-e8de-4de7-9e97-843f4d73d4d3" 00:17:39.663 ], 00:17:39.663 "product_name": "Raid Volume", 00:17:39.663 "block_size": 4096, 00:17:39.663 "num_blocks": 7936, 00:17:39.663 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:39.663 "assigned_rate_limits": { 00:17:39.663 "rw_ios_per_sec": 0, 00:17:39.663 "rw_mbytes_per_sec": 0, 00:17:39.663 "r_mbytes_per_sec": 0, 00:17:39.663 "w_mbytes_per_sec": 0 00:17:39.663 }, 00:17:39.663 "claimed": false, 00:17:39.663 "zoned": false, 00:17:39.663 "supported_io_types": { 00:17:39.663 "read": true, 00:17:39.663 "write": true, 00:17:39.663 "unmap": false, 00:17:39.663 "flush": false, 
00:17:39.663 "reset": true, 00:17:39.663 "nvme_admin": false, 00:17:39.663 "nvme_io": false, 00:17:39.663 "nvme_io_md": false, 00:17:39.663 "write_zeroes": true, 00:17:39.663 "zcopy": false, 00:17:39.663 "get_zone_info": false, 00:17:39.663 "zone_management": false, 00:17:39.663 "zone_append": false, 00:17:39.663 "compare": false, 00:17:39.663 "compare_and_write": false, 00:17:39.663 "abort": false, 00:17:39.663 "seek_hole": false, 00:17:39.663 "seek_data": false, 00:17:39.663 "copy": false, 00:17:39.663 "nvme_iov_md": false 00:17:39.663 }, 00:17:39.663 "memory_domains": [ 00:17:39.663 { 00:17:39.663 "dma_device_id": "system", 00:17:39.663 "dma_device_type": 1 00:17:39.663 }, 00:17:39.663 { 00:17:39.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.663 "dma_device_type": 2 00:17:39.663 }, 00:17:39.663 { 00:17:39.663 "dma_device_id": "system", 00:17:39.663 "dma_device_type": 1 00:17:39.663 }, 00:17:39.663 { 00:17:39.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.663 "dma_device_type": 2 00:17:39.663 } 00:17:39.663 ], 00:17:39.663 "driver_specific": { 00:17:39.663 "raid": { 00:17:39.663 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:39.663 "strip_size_kb": 0, 00:17:39.663 "state": "online", 00:17:39.663 "raid_level": "raid1", 00:17:39.663 "superblock": true, 00:17:39.663 "num_base_bdevs": 2, 00:17:39.663 "num_base_bdevs_discovered": 2, 00:17:39.663 "num_base_bdevs_operational": 2, 00:17:39.663 "base_bdevs_list": [ 00:17:39.663 { 00:17:39.663 "name": "pt1", 00:17:39.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.663 "is_configured": true, 00:17:39.663 "data_offset": 256, 00:17:39.663 "data_size": 7936 00:17:39.663 }, 00:17:39.663 { 00:17:39.663 "name": "pt2", 00:17:39.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.663 "is_configured": true, 00:17:39.663 "data_offset": 256, 00:17:39.663 "data_size": 7936 00:17:39.663 } 00:17:39.663 ] 00:17:39.663 } 00:17:39.663 } 00:17:39.663 }' 00:17:39.663 07:49:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:39.923 pt2' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.923 07:49:29 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:39.923 [2024-11-29 07:49:29.788272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=602d2246-e8de-4de7-9e97-843f4d73d4d3 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 602d2246-e8de-4de7-9e97-843f4d73d4d3 ']' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 [2024-11-29 07:49:29.831922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.923 [2024-11-29 07:49:29.831948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.923 [2024-11-29 07:49:29.832026] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.923 [2024-11-29 07:49:29.832086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.923 [2024-11-29 07:49:29.832119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.923 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.182 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 [2024-11-29 07:49:29.967692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.183 [2024-11-29 07:49:29.969651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.183 [2024-11-29 07:49:29.969720] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.183 [2024-11-29 07:49:29.969761] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.183 [2024-11-29 07:49:29.969775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.183 [2024-11-29 07:49:29.969785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:40.183 request: 00:17:40.183 { 00:17:40.183 "name": "raid_bdev1", 00:17:40.183 "raid_level": "raid1", 00:17:40.183 "base_bdevs": [ 00:17:40.183 "malloc1", 00:17:40.183 "malloc2" 00:17:40.183 ], 00:17:40.183 "superblock": false, 00:17:40.183 "method": "bdev_raid_create", 00:17:40.183 "req_id": 1 00:17:40.183 } 00:17:40.183 Got JSON-RPC error response 00:17:40.183 response: 00:17:40.183 { 00:17:40.183 "code": -17, 00:17:40.183 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.183 } 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 07:49:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 [2024-11-29 07:49:30.023588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.183 [2024-11-29 07:49:30.023635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.183 [2024-11-29 07:49:30.023653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:40.183 [2024-11-29 07:49:30.023663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.183 [2024-11-29 07:49:30.025791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.183 [2024-11-29 07:49:30.025827] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.183 [2024-11-29 07:49:30.025897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.183 [2024-11-29 07:49:30.025950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.183 pt1 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.183 "name": "raid_bdev1", 00:17:40.183 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:40.183 "strip_size_kb": 0, 00:17:40.183 "state": "configuring", 00:17:40.183 "raid_level": "raid1", 00:17:40.183 "superblock": true, 00:17:40.183 "num_base_bdevs": 2, 00:17:40.183 "num_base_bdevs_discovered": 1, 00:17:40.183 "num_base_bdevs_operational": 2, 00:17:40.183 "base_bdevs_list": [ 00:17:40.183 { 00:17:40.183 "name": "pt1", 00:17:40.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.183 "is_configured": true, 00:17:40.183 "data_offset": 256, 00:17:40.183 "data_size": 7936 00:17:40.183 }, 00:17:40.183 { 00:17:40.183 "name": null, 00:17:40.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.183 "is_configured": false, 00:17:40.183 "data_offset": 256, 00:17:40.183 "data_size": 7936 00:17:40.183 } 00:17:40.183 ] 00:17:40.183 }' 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.183 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:40.752 [2024-11-29 07:49:30.410928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.752 [2024-11-29 07:49:30.410997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.752 [2024-11-29 07:49:30.411017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:40.752 [2024-11-29 07:49:30.411026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.752 [2024-11-29 07:49:30.411451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.752 [2024-11-29 07:49:30.411477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.752 [2024-11-29 07:49:30.411544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:40.752 [2024-11-29 07:49:30.411567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.752 [2024-11-29 07:49:30.411684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:40.752 [2024-11-29 07:49:30.411695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:40.752 [2024-11-29 07:49:30.411945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:40.752 [2024-11-29 07:49:30.412099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:40.752 [2024-11-29 07:49:30.412108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:40.752 [2024-11-29 07:49:30.412253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.752 pt2 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:40.752 07:49:30 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.752 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.752 "name": "raid_bdev1", 00:17:40.752 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:40.752 
"strip_size_kb": 0, 00:17:40.752 "state": "online", 00:17:40.752 "raid_level": "raid1", 00:17:40.752 "superblock": true, 00:17:40.753 "num_base_bdevs": 2, 00:17:40.753 "num_base_bdevs_discovered": 2, 00:17:40.753 "num_base_bdevs_operational": 2, 00:17:40.753 "base_bdevs_list": [ 00:17:40.753 { 00:17:40.753 "name": "pt1", 00:17:40.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.753 "is_configured": true, 00:17:40.753 "data_offset": 256, 00:17:40.753 "data_size": 7936 00:17:40.753 }, 00:17:40.753 { 00:17:40.753 "name": "pt2", 00:17:40.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.753 "is_configured": true, 00:17:40.753 "data_offset": 256, 00:17:40.753 "data_size": 7936 00:17:40.753 } 00:17:40.753 ] 00:17:40.753 }' 00:17:40.753 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.753 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.013 07:49:30 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.013 [2024-11-29 07:49:30.886352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:41.013 "name": "raid_bdev1", 00:17:41.013 "aliases": [ 00:17:41.013 "602d2246-e8de-4de7-9e97-843f4d73d4d3" 00:17:41.013 ], 00:17:41.013 "product_name": "Raid Volume", 00:17:41.013 "block_size": 4096, 00:17:41.013 "num_blocks": 7936, 00:17:41.013 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:41.013 "assigned_rate_limits": { 00:17:41.013 "rw_ios_per_sec": 0, 00:17:41.013 "rw_mbytes_per_sec": 0, 00:17:41.013 "r_mbytes_per_sec": 0, 00:17:41.013 "w_mbytes_per_sec": 0 00:17:41.013 }, 00:17:41.013 "claimed": false, 00:17:41.013 "zoned": false, 00:17:41.013 "supported_io_types": { 00:17:41.013 "read": true, 00:17:41.013 "write": true, 00:17:41.013 "unmap": false, 00:17:41.013 "flush": false, 00:17:41.013 "reset": true, 00:17:41.013 "nvme_admin": false, 00:17:41.013 "nvme_io": false, 00:17:41.013 "nvme_io_md": false, 00:17:41.013 "write_zeroes": true, 00:17:41.013 "zcopy": false, 00:17:41.013 "get_zone_info": false, 00:17:41.013 "zone_management": false, 00:17:41.013 "zone_append": false, 00:17:41.013 "compare": false, 00:17:41.013 "compare_and_write": false, 00:17:41.013 "abort": false, 00:17:41.013 "seek_hole": false, 00:17:41.013 "seek_data": false, 00:17:41.013 "copy": false, 00:17:41.013 "nvme_iov_md": false 00:17:41.013 }, 00:17:41.013 "memory_domains": [ 00:17:41.013 { 00:17:41.013 "dma_device_id": "system", 00:17:41.013 "dma_device_type": 1 00:17:41.013 }, 00:17:41.013 { 00:17:41.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.013 "dma_device_type": 2 00:17:41.013 }, 00:17:41.013 { 00:17:41.013 "dma_device_id": "system", 00:17:41.013 
"dma_device_type": 1 00:17:41.013 }, 00:17:41.013 { 00:17:41.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.013 "dma_device_type": 2 00:17:41.013 } 00:17:41.013 ], 00:17:41.013 "driver_specific": { 00:17:41.013 "raid": { 00:17:41.013 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:41.013 "strip_size_kb": 0, 00:17:41.013 "state": "online", 00:17:41.013 "raid_level": "raid1", 00:17:41.013 "superblock": true, 00:17:41.013 "num_base_bdevs": 2, 00:17:41.013 "num_base_bdevs_discovered": 2, 00:17:41.013 "num_base_bdevs_operational": 2, 00:17:41.013 "base_bdevs_list": [ 00:17:41.013 { 00:17:41.013 "name": "pt1", 00:17:41.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.013 "is_configured": true, 00:17:41.013 "data_offset": 256, 00:17:41.013 "data_size": 7936 00:17:41.013 }, 00:17:41.013 { 00:17:41.013 "name": "pt2", 00:17:41.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.013 "is_configured": true, 00:17:41.013 "data_offset": 256, 00:17:41.013 "data_size": 7936 00:17:41.013 } 00:17:41.013 ] 00:17:41.013 } 00:17:41.013 } 00:17:41.013 }' 00:17:41.013 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.272 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:41.272 pt2' 00:17:41.272 07:49:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.272 [2024-11-29 
07:49:31.117919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 602d2246-e8de-4de7-9e97-843f4d73d4d3 '!=' 602d2246-e8de-4de7-9e97-843f4d73d4d3 ']' 00:17:41.272 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.273 [2024-11-29 07:49:31.157662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.273 "name": "raid_bdev1", 00:17:41.273 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:41.273 "strip_size_kb": 0, 00:17:41.273 "state": "online", 00:17:41.273 "raid_level": "raid1", 00:17:41.273 "superblock": true, 00:17:41.273 "num_base_bdevs": 2, 00:17:41.273 "num_base_bdevs_discovered": 1, 00:17:41.273 "num_base_bdevs_operational": 1, 00:17:41.273 "base_bdevs_list": [ 00:17:41.273 { 00:17:41.273 "name": null, 00:17:41.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.273 "is_configured": false, 00:17:41.273 "data_offset": 0, 00:17:41.273 "data_size": 7936 00:17:41.273 }, 00:17:41.273 { 00:17:41.273 "name": "pt2", 00:17:41.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.273 "is_configured": true, 00:17:41.273 "data_offset": 256, 00:17:41.273 "data_size": 7936 00:17:41.273 } 00:17:41.273 ] 00:17:41.273 }' 00:17:41.273 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.273 07:49:31 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.841 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.842 [2024-11-29 07:49:31.608880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.842 [2024-11-29 07:49:31.608914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.842 [2024-11-29 07:49:31.608995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.842 [2024-11-29 07:49:31.609042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.842 [2024-11-29 07:49:31.609054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.842 [2024-11-29 07:49:31.680758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.842 [2024-11-29 07:49:31.680826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.842 [2024-11-29 07:49:31.680843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:41.842 [2024-11-29 07:49:31.680854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.842 [2024-11-29 07:49:31.682964] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.842 [2024-11-29 07:49:31.683001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.842 [2024-11-29 07:49:31.683082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.842 [2024-11-29 07:49:31.683151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.842 [2024-11-29 07:49:31.683251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:41.842 [2024-11-29 07:49:31.683267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:41.842 [2024-11-29 07:49:31.683490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:41.842 [2024-11-29 07:49:31.683651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:41.842 [2024-11-29 07:49:31.683664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:41.842 [2024-11-29 07:49:31.683807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.842 pt2 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.842 "name": "raid_bdev1", 00:17:41.842 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:41.842 "strip_size_kb": 0, 00:17:41.842 "state": "online", 00:17:41.842 "raid_level": "raid1", 00:17:41.842 "superblock": true, 00:17:41.842 "num_base_bdevs": 2, 00:17:41.842 "num_base_bdevs_discovered": 1, 00:17:41.842 "num_base_bdevs_operational": 1, 00:17:41.842 "base_bdevs_list": [ 00:17:41.842 { 00:17:41.842 "name": null, 00:17:41.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.842 "is_configured": false, 00:17:41.842 "data_offset": 256, 00:17:41.842 "data_size": 7936 00:17:41.842 }, 00:17:41.842 { 00:17:41.842 "name": "pt2", 00:17:41.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.842 "is_configured": true, 00:17:41.842 "data_offset": 256, 00:17:41.842 "data_size": 7936 00:17:41.842 } 00:17:41.842 ] 00:17:41.842 }' 
00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.842 07:49:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.412 [2024-11-29 07:49:32.164006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.412 [2024-11-29 07:49:32.164040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.412 [2024-11-29 07:49:32.164128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.412 [2024-11-29 07:49:32.164178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.412 [2024-11-29 07:49:32.164192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.412 [2024-11-29 07:49:32.215978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.412 [2024-11-29 07:49:32.216034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.412 [2024-11-29 07:49:32.216053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:42.412 [2024-11-29 07:49:32.216062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.412 [2024-11-29 07:49:32.218246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.412 [2024-11-29 07:49:32.218281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.412 [2024-11-29 07:49:32.218361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:42.412 [2024-11-29 07:49:32.218435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.412 [2024-11-29 07:49:32.218585] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:42.412 [2024-11-29 07:49:32.218601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.412 [2024-11-29 07:49:32.218616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:42.412 [2024-11-29 07:49:32.218681] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.412 [2024-11-29 07:49:32.218752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:42.412 [2024-11-29 07:49:32.218764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.412 [2024-11-29 07:49:32.219008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.412 [2024-11-29 07:49:32.219161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:42.412 [2024-11-29 07:49:32.219186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:42.412 [2024-11-29 07:49:32.219334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.412 pt1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.412 "name": "raid_bdev1", 00:17:42.412 "uuid": "602d2246-e8de-4de7-9e97-843f4d73d4d3", 00:17:42.412 "strip_size_kb": 0, 00:17:42.412 "state": "online", 00:17:42.412 "raid_level": "raid1", 00:17:42.412 "superblock": true, 00:17:42.412 "num_base_bdevs": 2, 00:17:42.412 "num_base_bdevs_discovered": 1, 00:17:42.412 "num_base_bdevs_operational": 1, 00:17:42.412 "base_bdevs_list": [ 00:17:42.412 { 00:17:42.412 "name": null, 00:17:42.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.412 "is_configured": false, 00:17:42.412 "data_offset": 256, 00:17:42.412 "data_size": 7936 00:17:42.412 }, 00:17:42.412 { 00:17:42.412 "name": "pt2", 00:17:42.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.412 "is_configured": true, 00:17:42.412 "data_offset": 256, 00:17:42.412 "data_size": 7936 00:17:42.412 } 00:17:42.412 ] 00:17:42.412 }' 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.412 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.982 [2024-11-29 07:49:32.707377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 602d2246-e8de-4de7-9e97-843f4d73d4d3 '!=' 602d2246-e8de-4de7-9e97-843f4d73d4d3 ']' 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85846 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85846 ']' 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85846 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:42.982 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85846 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.983 killing process with pid 85846 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85846' 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85846 00:17:42.983 [2024-11-29 07:49:32.786987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.983 [2024-11-29 07:49:32.787086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.983 [2024-11-29 07:49:32.787147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.983 [2024-11-29 07:49:32.787163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:42.983 07:49:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85846 00:17:43.242 [2024-11-29 07:49:32.986125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.185 07:49:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:44.185 00:17:44.185 real 0m5.934s 00:17:44.185 user 0m9.015s 00:17:44.185 sys 0m1.075s 00:17:44.185 07:49:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.185 07:49:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.185 ************************************ 00:17:44.185 END TEST raid_superblock_test_4k 00:17:44.185 ************************************ 00:17:44.185 07:49:34 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:44.185 07:49:34 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:44.185 07:49:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:44.185 07:49:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.185 07:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.460 ************************************ 00:17:44.460 START TEST raid_rebuild_test_sb_4k 00:17:44.460 ************************************ 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86169 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86169 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86169 ']' 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.460 07:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.460 [2024-11-29 07:49:34.240947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:44.460 [2024-11-29 07:49:34.241167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:44.460 Zero copy mechanism will not be used. 00:17:44.460 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86169 ] 00:17:44.460 [2024-11-29 07:49:34.396390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.720 [2024-11-29 07:49:34.503246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.979 [2024-11-29 07:49:34.709762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.979 [2024-11-29 07:49:34.709912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:45.253 
07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 BaseBdev1_malloc 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 [2024-11-29 07:49:35.107880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.253 [2024-11-29 07:49:35.108055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.253 [2024-11-29 07:49:35.108082] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.253 [2024-11-29 07:49:35.108093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.253 [2024-11-29 07:49:35.110179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.253 [2024-11-29 07:49:35.110219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.253 BaseBdev1 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.253 BaseBdev2_malloc 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.253 [2024-11-29 07:49:35.163521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:45.253 [2024-11-29 07:49:35.163675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.253 [2024-11-29 07:49:35.163716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:45.253 [2024-11-29 07:49:35.163756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.253 [2024-11-29 07:49:35.165815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.253 [2024-11-29 07:49:35.165903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.253 BaseBdev2 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.253 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 spare_malloc 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 spare_delay 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 [2024-11-29 07:49:35.242137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.520 [2024-11-29 07:49:35.242280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.520 [2024-11-29 07:49:35.242335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:45.520 [2024-11-29 07:49:35.242367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.520 [2024-11-29 07:49:35.244418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.520 [2024-11-29 07:49:35.244498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.520 spare 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 
[2024-11-29 07:49:35.254174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.520 [2024-11-29 07:49:35.255976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.520 [2024-11-29 07:49:35.256222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:45.520 [2024-11-29 07:49:35.256271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.520 [2024-11-29 07:49:35.256530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:45.520 [2024-11-29 07:49:35.256727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:45.520 [2024-11-29 07:49:35.256769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:45.520 [2024-11-29 07:49:35.256959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.520 "name": "raid_bdev1", 00:17:45.520 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:45.520 "strip_size_kb": 0, 00:17:45.520 "state": "online", 00:17:45.520 "raid_level": "raid1", 00:17:45.520 "superblock": true, 00:17:45.520 "num_base_bdevs": 2, 00:17:45.520 "num_base_bdevs_discovered": 2, 00:17:45.520 "num_base_bdevs_operational": 2, 00:17:45.520 "base_bdevs_list": [ 00:17:45.520 { 00:17:45.520 "name": "BaseBdev1", 00:17:45.520 "uuid": "ecab60e2-6675-5065-9c9e-f96d02bf6161", 00:17:45.520 "is_configured": true, 00:17:45.520 "data_offset": 256, 00:17:45.520 "data_size": 7936 00:17:45.520 }, 00:17:45.520 { 00:17:45.520 "name": "BaseBdev2", 00:17:45.520 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:45.520 "is_configured": true, 00:17:45.520 "data_offset": 256, 00:17:45.520 "data_size": 7936 00:17:45.520 } 00:17:45.520 ] 00:17:45.520 }' 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.520 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:45.779 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.779 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:45.779 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.779 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.779 [2024-11-29 07:49:35.701657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.779 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.038 07:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:46.038 [2024-11-29 07:49:35.968982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.322 /dev/nbd0 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.322 1+0 records in 00:17:46.322 1+0 records out 00:17:46.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556156 s, 7.4 MB/s 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:46.322 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:46.889 7936+0 records in 00:17:46.889 7936+0 records out 00:17:46.889 32505856 bytes (33 MB, 31 MiB) copied, 0.596284 s, 54.5 MB/s 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.889 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.149 [2024-11-29 07:49:36.856206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.149 [2024-11-29 07:49:36.868305] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.149 "name": 
"raid_bdev1", 00:17:47.149 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:47.149 "strip_size_kb": 0, 00:17:47.149 "state": "online", 00:17:47.149 "raid_level": "raid1", 00:17:47.149 "superblock": true, 00:17:47.149 "num_base_bdevs": 2, 00:17:47.149 "num_base_bdevs_discovered": 1, 00:17:47.149 "num_base_bdevs_operational": 1, 00:17:47.149 "base_bdevs_list": [ 00:17:47.149 { 00:17:47.149 "name": null, 00:17:47.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.149 "is_configured": false, 00:17:47.149 "data_offset": 0, 00:17:47.149 "data_size": 7936 00:17:47.149 }, 00:17:47.149 { 00:17:47.149 "name": "BaseBdev2", 00:17:47.149 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:47.149 "is_configured": true, 00:17:47.149 "data_offset": 256, 00:17:47.149 "data_size": 7936 00:17:47.149 } 00:17:47.149 ] 00:17:47.149 }' 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.149 07:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.408 07:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.408 07:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.408 07:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.408 [2024-11-29 07:49:37.291600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.408 [2024-11-29 07:49:37.307762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:47.408 07:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.408 07:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:47.408 [2024-11-29 07:49:37.309654] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.786 07:49:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.786 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.787 "name": "raid_bdev1", 00:17:48.787 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:48.787 "strip_size_kb": 0, 00:17:48.787 "state": "online", 00:17:48.787 "raid_level": "raid1", 00:17:48.787 "superblock": true, 00:17:48.787 "num_base_bdevs": 2, 00:17:48.787 "num_base_bdevs_discovered": 2, 00:17:48.787 "num_base_bdevs_operational": 2, 00:17:48.787 "process": { 00:17:48.787 "type": "rebuild", 00:17:48.787 "target": "spare", 00:17:48.787 "progress": { 00:17:48.787 "blocks": 2560, 00:17:48.787 "percent": 32 00:17:48.787 } 00:17:48.787 }, 00:17:48.787 "base_bdevs_list": [ 00:17:48.787 { 00:17:48.787 "name": "spare", 00:17:48.787 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:48.787 "is_configured": true, 00:17:48.787 "data_offset": 256, 
00:17:48.787 "data_size": 7936 00:17:48.787 }, 00:17:48.787 { 00:17:48.787 "name": "BaseBdev2", 00:17:48.787 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:48.787 "is_configured": true, 00:17:48.787 "data_offset": 256, 00:17:48.787 "data_size": 7936 00:17:48.787 } 00:17:48.787 ] 00:17:48.787 }' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.787 [2024-11-29 07:49:38.472780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.787 [2024-11-29 07:49:38.514910] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.787 [2024-11-29 07:49:38.515044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.787 [2024-11-29 07:49:38.515080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.787 [2024-11-29 07:49:38.515110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.787 
07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.787 "name": "raid_bdev1", 00:17:48.787 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:48.787 "strip_size_kb": 0, 00:17:48.787 "state": "online", 00:17:48.787 "raid_level": "raid1", 00:17:48.787 "superblock": true, 00:17:48.787 "num_base_bdevs": 2, 00:17:48.787 "num_base_bdevs_discovered": 1, 00:17:48.787 
"num_base_bdevs_operational": 1, 00:17:48.787 "base_bdevs_list": [ 00:17:48.787 { 00:17:48.787 "name": null, 00:17:48.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.787 "is_configured": false, 00:17:48.787 "data_offset": 0, 00:17:48.787 "data_size": 7936 00:17:48.787 }, 00:17:48.787 { 00:17:48.787 "name": "BaseBdev2", 00:17:48.787 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:48.787 "is_configured": true, 00:17:48.787 "data_offset": 256, 00:17:48.787 "data_size": 7936 00:17:48.787 } 00:17:48.787 ] 00:17:48.787 }' 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.787 07:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.354 
"name": "raid_bdev1", 00:17:49.354 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:49.354 "strip_size_kb": 0, 00:17:49.354 "state": "online", 00:17:49.354 "raid_level": "raid1", 00:17:49.354 "superblock": true, 00:17:49.354 "num_base_bdevs": 2, 00:17:49.354 "num_base_bdevs_discovered": 1, 00:17:49.354 "num_base_bdevs_operational": 1, 00:17:49.354 "base_bdevs_list": [ 00:17:49.354 { 00:17:49.354 "name": null, 00:17:49.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.354 "is_configured": false, 00:17:49.354 "data_offset": 0, 00:17:49.354 "data_size": 7936 00:17:49.354 }, 00:17:49.354 { 00:17:49.354 "name": "BaseBdev2", 00:17:49.354 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:49.354 "is_configured": true, 00:17:49.354 "data_offset": 256, 00:17:49.354 "data_size": 7936 00:17:49.354 } 00:17:49.354 ] 00:17:49.354 }' 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.354 [2024-11-29 07:49:39.160670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.354 [2024-11-29 07:49:39.176500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:49.354 07:49:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:49.354 [2024-11-29 07:49:39.178338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.293 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.293 "name": "raid_bdev1", 00:17:50.293 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:50.293 "strip_size_kb": 0, 00:17:50.293 "state": "online", 00:17:50.293 "raid_level": "raid1", 00:17:50.293 "superblock": true, 00:17:50.293 "num_base_bdevs": 2, 00:17:50.293 "num_base_bdevs_discovered": 2, 00:17:50.293 "num_base_bdevs_operational": 2, 00:17:50.293 "process": { 00:17:50.293 "type": "rebuild", 00:17:50.293 "target": "spare", 00:17:50.293 "progress": { 00:17:50.293 "blocks": 2560, 00:17:50.293 
"percent": 32 00:17:50.293 } 00:17:50.293 }, 00:17:50.293 "base_bdevs_list": [ 00:17:50.293 { 00:17:50.293 "name": "spare", 00:17:50.293 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:50.293 "is_configured": true, 00:17:50.293 "data_offset": 256, 00:17:50.293 "data_size": 7936 00:17:50.293 }, 00:17:50.293 { 00:17:50.293 "name": "BaseBdev2", 00:17:50.293 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:50.293 "is_configured": true, 00:17:50.293 "data_offset": 256, 00:17:50.293 "data_size": 7936 00:17:50.293 } 00:17:50.293 ] 00:17:50.293 }' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:50.553 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=659 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.553 "name": "raid_bdev1", 00:17:50.553 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:50.553 "strip_size_kb": 0, 00:17:50.553 "state": "online", 00:17:50.553 "raid_level": "raid1", 00:17:50.553 "superblock": true, 00:17:50.553 "num_base_bdevs": 2, 00:17:50.553 "num_base_bdevs_discovered": 2, 00:17:50.553 "num_base_bdevs_operational": 2, 00:17:50.553 "process": { 00:17:50.553 "type": "rebuild", 00:17:50.553 "target": "spare", 00:17:50.553 "progress": { 00:17:50.553 "blocks": 2816, 00:17:50.553 "percent": 35 00:17:50.553 } 00:17:50.553 }, 00:17:50.553 "base_bdevs_list": [ 00:17:50.553 { 00:17:50.553 "name": "spare", 00:17:50.553 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:50.553 "is_configured": true, 00:17:50.553 "data_offset": 256, 00:17:50.553 "data_size": 7936 00:17:50.553 }, 00:17:50.553 { 00:17:50.553 "name": "BaseBdev2", 
00:17:50.553 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:50.553 "is_configured": true, 00:17:50.553 "data_offset": 256, 00:17:50.553 "data_size": 7936 00:17:50.553 } 00:17:50.553 ] 00:17:50.553 }' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.553 07:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.491 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.750 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.750 "name": "raid_bdev1", 00:17:51.750 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:51.750 "strip_size_kb": 0, 00:17:51.750 "state": "online", 00:17:51.750 "raid_level": "raid1", 00:17:51.750 "superblock": true, 00:17:51.750 "num_base_bdevs": 2, 00:17:51.750 "num_base_bdevs_discovered": 2, 00:17:51.750 "num_base_bdevs_operational": 2, 00:17:51.750 "process": { 00:17:51.750 "type": "rebuild", 00:17:51.750 "target": "spare", 00:17:51.750 "progress": { 00:17:51.750 "blocks": 5632, 00:17:51.750 "percent": 70 00:17:51.750 } 00:17:51.750 }, 00:17:51.750 "base_bdevs_list": [ 00:17:51.750 { 00:17:51.750 "name": "spare", 00:17:51.750 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:51.750 "is_configured": true, 00:17:51.750 "data_offset": 256, 00:17:51.750 "data_size": 7936 00:17:51.750 }, 00:17:51.750 { 00:17:51.750 "name": "BaseBdev2", 00:17:51.750 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:51.750 "is_configured": true, 00:17:51.750 "data_offset": 256, 00:17:51.750 "data_size": 7936 00:17:51.750 } 00:17:51.750 ] 00:17:51.751 }' 00:17:51.751 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.751 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.751 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.751 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.751 07:49:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.689 [2024-11-29 07:49:42.291446] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:52.689 [2024-11-29 07:49:42.291509] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:52.689 [2024-11-29 07:49:42.291601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.949 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.949 "name": "raid_bdev1", 00:17:52.949 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:52.950 "strip_size_kb": 0, 00:17:52.950 "state": "online", 00:17:52.950 "raid_level": "raid1", 00:17:52.950 "superblock": true, 00:17:52.950 "num_base_bdevs": 2, 00:17:52.950 "num_base_bdevs_discovered": 2, 00:17:52.950 "num_base_bdevs_operational": 2, 00:17:52.950 "base_bdevs_list": [ 00:17:52.950 { 00:17:52.950 "name": 
"spare", 00:17:52.950 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:52.950 "is_configured": true, 00:17:52.950 "data_offset": 256, 00:17:52.950 "data_size": 7936 00:17:52.950 }, 00:17:52.950 { 00:17:52.950 "name": "BaseBdev2", 00:17:52.950 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:52.950 "is_configured": true, 00:17:52.950 "data_offset": 256, 00:17:52.950 "data_size": 7936 00:17:52.950 } 00:17:52.950 ] 00:17:52.950 }' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.950 "name": "raid_bdev1", 00:17:52.950 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:52.950 "strip_size_kb": 0, 00:17:52.950 "state": "online", 00:17:52.950 "raid_level": "raid1", 00:17:52.950 "superblock": true, 00:17:52.950 "num_base_bdevs": 2, 00:17:52.950 "num_base_bdevs_discovered": 2, 00:17:52.950 "num_base_bdevs_operational": 2, 00:17:52.950 "base_bdevs_list": [ 00:17:52.950 { 00:17:52.950 "name": "spare", 00:17:52.950 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:52.950 "is_configured": true, 00:17:52.950 "data_offset": 256, 00:17:52.950 "data_size": 7936 00:17:52.950 }, 00:17:52.950 { 00:17:52.950 "name": "BaseBdev2", 00:17:52.950 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:52.950 "is_configured": true, 00:17:52.950 "data_offset": 256, 00:17:52.950 "data_size": 7936 00:17:52.950 } 00:17:52.950 ] 00:17:52.950 }' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.950 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.210 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.210 "name": "raid_bdev1", 00:17:53.210 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:53.210 "strip_size_kb": 0, 00:17:53.210 "state": "online", 00:17:53.210 "raid_level": "raid1", 00:17:53.210 "superblock": true, 00:17:53.210 "num_base_bdevs": 2, 00:17:53.210 "num_base_bdevs_discovered": 2, 00:17:53.210 "num_base_bdevs_operational": 2, 00:17:53.210 "base_bdevs_list": [ 00:17:53.210 { 00:17:53.210 "name": "spare", 00:17:53.210 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:53.210 "is_configured": true, 00:17:53.210 "data_offset": 256, 00:17:53.210 "data_size": 7936 00:17:53.210 }, 00:17:53.210 
{ 00:17:53.210 "name": "BaseBdev2", 00:17:53.210 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:53.210 "is_configured": true, 00:17:53.210 "data_offset": 256, 00:17:53.210 "data_size": 7936 00:17:53.210 } 00:17:53.210 ] 00:17:53.210 }' 00:17:53.211 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.211 07:49:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.471 [2024-11-29 07:49:43.340314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.471 [2024-11-29 07:49:43.340393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.471 [2024-11-29 07:49:43.340488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.471 [2024-11-29 07:49:43.340570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.471 [2024-11-29 07:49:43.340618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:53.471 
07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:53.471 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:53.732 /dev/nbd0 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:53.732 07:49:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:53.732 1+0 records in 00:17:53.732 1+0 records out 00:17:53.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050118 s, 8.2 MB/s 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:53.732 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:53.992 /dev/nbd1 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:53.992 1+0 records in 00:17:53.992 1+0 records out 00:17:53.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307168 s, 13.3 MB/s 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:53.992 07:49:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.252 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.512 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.772 [2024-11-29 07:49:44.468087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.772 [2024-11-29 07:49:44.468213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.772 [2024-11-29 07:49:44.468258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:54.772 [2024-11-29 07:49:44.468287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.772 [2024-11-29 07:49:44.470481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.772 [2024-11-29 07:49:44.470568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.772 [2024-11-29 07:49:44.470712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:54.772 [2024-11-29 07:49:44.470785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.772 [2024-11-29 07:49:44.470976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.772 spare 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.772 [2024-11-29 07:49:44.570917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:54.772 [2024-11-29 07:49:44.570985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.772 [2024-11-29 07:49:44.571348] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:54.772 [2024-11-29 07:49:44.571586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:54.772 [2024-11-29 07:49:44.571633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:54.772 [2024-11-29 07:49:44.571858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.772 07:49:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.772 "name": "raid_bdev1", 00:17:54.772 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:54.772 "strip_size_kb": 0, 00:17:54.772 "state": "online", 00:17:54.772 "raid_level": "raid1", 00:17:54.772 "superblock": true, 00:17:54.772 "num_base_bdevs": 2, 00:17:54.772 "num_base_bdevs_discovered": 2, 00:17:54.772 "num_base_bdevs_operational": 2, 00:17:54.772 "base_bdevs_list": [ 00:17:54.772 { 00:17:54.772 "name": "spare", 00:17:54.772 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:54.772 "is_configured": true, 00:17:54.772 "data_offset": 256, 00:17:54.772 "data_size": 7936 00:17:54.772 }, 00:17:54.772 { 00:17:54.772 "name": "BaseBdev2", 00:17:54.772 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:54.772 "is_configured": true, 00:17:54.772 "data_offset": 256, 00:17:54.772 "data_size": 7936 00:17:54.772 } 00:17:54.772 ] 00:17:54.772 }' 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.772 07:49:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.344 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.345 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.345 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.345 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.345 "name": "raid_bdev1", 00:17:55.345 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:55.345 "strip_size_kb": 0, 00:17:55.345 "state": "online", 00:17:55.345 "raid_level": "raid1", 00:17:55.345 "superblock": true, 00:17:55.345 "num_base_bdevs": 2, 00:17:55.345 "num_base_bdevs_discovered": 2, 00:17:55.345 "num_base_bdevs_operational": 2, 00:17:55.345 "base_bdevs_list": [ 00:17:55.345 { 00:17:55.345 "name": "spare", 00:17:55.345 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:55.345 "is_configured": true, 00:17:55.345 "data_offset": 256, 00:17:55.345 "data_size": 7936 00:17:55.345 }, 00:17:55.345 { 00:17:55.345 "name": "BaseBdev2", 00:17:55.346 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:55.346 "is_configured": true, 00:17:55.346 "data_offset": 256, 00:17:55.346 "data_size": 7936 00:17:55.346 } 00:17:55.346 ] 00:17:55.346 }' 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.346 [2024-11-29 07:49:45.246843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.346 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.346 07:49:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.347 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.609 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.609 "name": "raid_bdev1", 00:17:55.609 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:55.609 "strip_size_kb": 0, 00:17:55.609 "state": "online", 00:17:55.609 "raid_level": "raid1", 00:17:55.609 "superblock": true, 00:17:55.609 "num_base_bdevs": 2, 00:17:55.609 "num_base_bdevs_discovered": 1, 00:17:55.609 "num_base_bdevs_operational": 1, 00:17:55.609 "base_bdevs_list": [ 00:17:55.609 { 00:17:55.609 "name": null, 00:17:55.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.609 "is_configured": false, 00:17:55.609 "data_offset": 0, 00:17:55.609 "data_size": 7936 00:17:55.609 }, 00:17:55.609 { 00:17:55.609 "name": "BaseBdev2", 00:17:55.609 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:55.609 "is_configured": true, 00:17:55.609 "data_offset": 256, 00:17:55.609 "data_size": 7936 00:17:55.609 } 00:17:55.609 ] 00:17:55.609 }' 00:17:55.609 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.609 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.869 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:55.869 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.869 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.869 [2024-11-29 07:49:45.678149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.869 [2024-11-29 07:49:45.678401] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.869 [2024-11-29 07:49:45.678465] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:55.869 [2024-11-29 07:49:45.678522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.869 [2024-11-29 07:49:45.694604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:55.869 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.869 07:49:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:55.869 [2024-11-29 07:49:45.696416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.808 
07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.808 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.808 "name": "raid_bdev1", 00:17:56.808 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:56.808 "strip_size_kb": 0, 00:17:56.808 "state": "online", 00:17:56.808 "raid_level": "raid1", 00:17:56.808 "superblock": true, 00:17:56.808 "num_base_bdevs": 2, 00:17:56.808 "num_base_bdevs_discovered": 2, 00:17:56.808 "num_base_bdevs_operational": 2, 00:17:56.808 "process": { 00:17:56.808 "type": "rebuild", 00:17:56.808 "target": "spare", 00:17:56.808 "progress": { 00:17:56.808 "blocks": 2560, 00:17:56.808 "percent": 32 00:17:56.808 } 00:17:56.808 }, 00:17:56.808 "base_bdevs_list": [ 00:17:56.808 { 00:17:56.808 "name": "spare", 00:17:56.808 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:56.808 "is_configured": true, 00:17:56.808 "data_offset": 256, 00:17:56.808 "data_size": 7936 00:17:56.808 }, 00:17:56.808 { 00:17:56.808 "name": "BaseBdev2", 00:17:56.808 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:56.808 "is_configured": true, 00:17:56.808 "data_offset": 256, 00:17:56.808 "data_size": 7936 00:17:56.808 } 00:17:56.808 ] 00:17:56.808 }' 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.069 07:49:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.069 [2024-11-29 07:49:46.856174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.069 [2024-11-29 07:49:46.901075] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.069 [2024-11-29 07:49:46.901224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.069 [2024-11-29 07:49:46.901273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.069 [2024-11-29 07:49:46.901297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.069 07:49:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.069 "name": "raid_bdev1", 00:17:57.069 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:57.069 "strip_size_kb": 0, 00:17:57.069 "state": "online", 00:17:57.069 "raid_level": "raid1", 00:17:57.069 "superblock": true, 00:17:57.069 "num_base_bdevs": 2, 00:17:57.069 "num_base_bdevs_discovered": 1, 00:17:57.069 "num_base_bdevs_operational": 1, 00:17:57.069 "base_bdevs_list": [ 00:17:57.069 { 00:17:57.069 "name": null, 00:17:57.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.069 "is_configured": false, 00:17:57.069 "data_offset": 0, 00:17:57.069 "data_size": 7936 00:17:57.069 }, 00:17:57.069 { 00:17:57.069 "name": "BaseBdev2", 00:17:57.069 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:57.069 "is_configured": true, 00:17:57.069 "data_offset": 256, 00:17:57.069 
"data_size": 7936 00:17:57.069 } 00:17:57.069 ] 00:17:57.069 }' 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.069 07:49:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.639 07:49:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:57.639 07:49:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.639 07:49:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.639 [2024-11-29 07:49:47.434531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:57.639 [2024-11-29 07:49:47.434658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.639 [2024-11-29 07:49:47.434698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:57.639 [2024-11-29 07:49:47.434728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.639 [2024-11-29 07:49:47.435216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.639 [2024-11-29 07:49:47.435284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:57.639 [2024-11-29 07:49:47.435406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:57.639 [2024-11-29 07:49:47.435449] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:57.639 [2024-11-29 07:49:47.435486] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:57.639 [2024-11-29 07:49:47.435532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.639 [2024-11-29 07:49:47.449998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:57.639 spare 00:17:57.639 07:49:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.639 07:49:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:57.639 [2024-11-29 07:49:47.451813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.579 "name": "raid_bdev1", 00:17:58.579 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:58.579 "strip_size_kb": 0, 00:17:58.579 
"state": "online", 00:17:58.579 "raid_level": "raid1", 00:17:58.579 "superblock": true, 00:17:58.579 "num_base_bdevs": 2, 00:17:58.579 "num_base_bdevs_discovered": 2, 00:17:58.579 "num_base_bdevs_operational": 2, 00:17:58.579 "process": { 00:17:58.579 "type": "rebuild", 00:17:58.579 "target": "spare", 00:17:58.579 "progress": { 00:17:58.579 "blocks": 2560, 00:17:58.579 "percent": 32 00:17:58.579 } 00:17:58.579 }, 00:17:58.579 "base_bdevs_list": [ 00:17:58.579 { 00:17:58.579 "name": "spare", 00:17:58.579 "uuid": "05ccef9b-5357-5b30-b38d-34b4599f3244", 00:17:58.579 "is_configured": true, 00:17:58.579 "data_offset": 256, 00:17:58.579 "data_size": 7936 00:17:58.579 }, 00:17:58.579 { 00:17:58.579 "name": "BaseBdev2", 00:17:58.579 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:58.579 "is_configured": true, 00:17:58.579 "data_offset": 256, 00:17:58.579 "data_size": 7936 00:17:58.579 } 00:17:58.579 ] 00:17:58.579 }' 00:17:58.579 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.839 [2024-11-29 07:49:48.615522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.839 [2024-11-29 07:49:48.656453] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:58.839 [2024-11-29 07:49:48.656568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.839 [2024-11-29 07:49:48.656605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.839 [2024-11-29 07:49:48.656625] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.839 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.840 07:49:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.840 "name": "raid_bdev1", 00:17:58.840 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:58.840 "strip_size_kb": 0, 00:17:58.840 "state": "online", 00:17:58.840 "raid_level": "raid1", 00:17:58.840 "superblock": true, 00:17:58.840 "num_base_bdevs": 2, 00:17:58.840 "num_base_bdevs_discovered": 1, 00:17:58.840 "num_base_bdevs_operational": 1, 00:17:58.840 "base_bdevs_list": [ 00:17:58.840 { 00:17:58.840 "name": null, 00:17:58.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.840 "is_configured": false, 00:17:58.840 "data_offset": 0, 00:17:58.840 "data_size": 7936 00:17:58.840 }, 00:17:58.840 { 00:17:58.840 "name": "BaseBdev2", 00:17:58.840 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:58.840 "is_configured": true, 00:17:58.840 "data_offset": 256, 00:17:58.840 "data_size": 7936 00:17:58.840 } 00:17:58.840 ] 00:17:58.840 }' 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.840 07:49:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.409 "name": "raid_bdev1", 00:17:59.409 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:17:59.409 "strip_size_kb": 0, 00:17:59.409 "state": "online", 00:17:59.409 "raid_level": "raid1", 00:17:59.409 "superblock": true, 00:17:59.409 "num_base_bdevs": 2, 00:17:59.409 "num_base_bdevs_discovered": 1, 00:17:59.409 "num_base_bdevs_operational": 1, 00:17:59.409 "base_bdevs_list": [ 00:17:59.409 { 00:17:59.409 "name": null, 00:17:59.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.409 "is_configured": false, 00:17:59.409 "data_offset": 0, 00:17:59.409 "data_size": 7936 00:17:59.409 }, 00:17:59.409 { 00:17:59.409 "name": "BaseBdev2", 00:17:59.409 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:17:59.409 "is_configured": true, 00:17:59.409 "data_offset": 256, 00:17:59.409 "data_size": 7936 00:17:59.409 } 00:17:59.409 ] 00:17:59.409 }' 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.409 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.409 [2024-11-29 07:49:49.316795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:59.409 [2024-11-29 07:49:49.316919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.410 [2024-11-29 07:49:49.316970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:59.410 [2024-11-29 07:49:49.317017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.410 [2024-11-29 07:49:49.317507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.410 [2024-11-29 07:49:49.317565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:59.410 [2024-11-29 07:49:49.317673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:59.410 [2024-11-29 07:49:49.317711] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:59.410 [2024-11-29 07:49:49.317752] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:59.410 [2024-11-29 07:49:49.317781] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:59.410 BaseBdev1 00:17:59.410 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.410 07:49:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.808 "name": "raid_bdev1", 00:18:00.808 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:18:00.808 "strip_size_kb": 0, 00:18:00.808 "state": "online", 00:18:00.808 "raid_level": "raid1", 00:18:00.808 "superblock": true, 00:18:00.808 "num_base_bdevs": 2, 00:18:00.808 "num_base_bdevs_discovered": 1, 00:18:00.808 "num_base_bdevs_operational": 1, 00:18:00.808 "base_bdevs_list": [ 00:18:00.808 { 00:18:00.808 "name": null, 00:18:00.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.808 "is_configured": false, 00:18:00.808 "data_offset": 0, 00:18:00.808 "data_size": 7936 00:18:00.808 }, 00:18:00.808 { 00:18:00.808 "name": "BaseBdev2", 00:18:00.808 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:18:00.808 "is_configured": true, 00:18:00.808 "data_offset": 256, 00:18:00.808 "data_size": 7936 00:18:00.808 } 00:18:00.808 ] 00:18:00.808 }' 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.808 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.068 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.068 "name": "raid_bdev1", 00:18:01.068 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:18:01.068 "strip_size_kb": 0, 00:18:01.068 "state": "online", 00:18:01.068 "raid_level": "raid1", 00:18:01.068 "superblock": true, 00:18:01.068 "num_base_bdevs": 2, 00:18:01.068 "num_base_bdevs_discovered": 1, 00:18:01.068 "num_base_bdevs_operational": 1, 00:18:01.068 "base_bdevs_list": [ 00:18:01.068 { 00:18:01.068 "name": null, 00:18:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.068 "is_configured": false, 00:18:01.068 "data_offset": 0, 00:18:01.068 "data_size": 7936 00:18:01.068 }, 00:18:01.068 { 00:18:01.068 "name": "BaseBdev2", 00:18:01.068 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:18:01.068 "is_configured": true, 00:18:01.068 "data_offset": 256, 00:18:01.068 "data_size": 7936 00:18:01.069 } 00:18:01.069 ] 00:18:01.069 }' 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.069 [2024-11-29 07:49:50.898393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.069 [2024-11-29 07:49:50.898624] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.069 [2024-11-29 07:49:50.898688] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:01.069 request: 00:18:01.069 { 00:18:01.069 "base_bdev": "BaseBdev1", 00:18:01.069 "raid_bdev": "raid_bdev1", 00:18:01.069 "method": "bdev_raid_add_base_bdev", 00:18:01.069 "req_id": 1 00:18:01.069 } 00:18:01.069 Got JSON-RPC error response 00:18:01.069 response: 00:18:01.069 { 00:18:01.069 "code": -22, 00:18:01.069 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:01.069 } 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.069 07:49:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.007 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.267 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.267 "name": "raid_bdev1", 00:18:02.267 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:18:02.267 "strip_size_kb": 0, 00:18:02.267 "state": "online", 00:18:02.267 "raid_level": "raid1", 00:18:02.267 "superblock": true, 00:18:02.267 "num_base_bdevs": 2, 00:18:02.267 "num_base_bdevs_discovered": 1, 00:18:02.267 "num_base_bdevs_operational": 1, 00:18:02.267 "base_bdevs_list": [ 00:18:02.267 { 00:18:02.267 "name": null, 00:18:02.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.267 "is_configured": false, 00:18:02.267 "data_offset": 0, 00:18:02.267 "data_size": 7936 00:18:02.267 }, 00:18:02.267 { 00:18:02.267 "name": "BaseBdev2", 00:18:02.267 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:18:02.267 "is_configured": true, 00:18:02.267 "data_offset": 256, 00:18:02.267 "data_size": 7936 00:18:02.267 } 00:18:02.267 ] 00:18:02.267 }' 00:18:02.267 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.267 07:49:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.526 07:49:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.526 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.526 "name": "raid_bdev1", 00:18:02.526 "uuid": "84a578a5-fdce-4fbe-808c-5052e230c758", 00:18:02.526 "strip_size_kb": 0, 00:18:02.526 "state": "online", 00:18:02.526 "raid_level": "raid1", 00:18:02.526 "superblock": true, 00:18:02.526 "num_base_bdevs": 2, 00:18:02.526 "num_base_bdevs_discovered": 1, 00:18:02.526 "num_base_bdevs_operational": 1, 00:18:02.526 "base_bdevs_list": [ 00:18:02.526 { 00:18:02.526 "name": null, 00:18:02.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.526 "is_configured": false, 00:18:02.526 "data_offset": 0, 00:18:02.526 "data_size": 7936 00:18:02.526 }, 00:18:02.526 { 00:18:02.527 "name": "BaseBdev2", 00:18:02.527 "uuid": "3c5b50b6-89fb-5335-8cd5-b2ca82f3b7f0", 00:18:02.527 "is_configured": true, 00:18:02.527 "data_offset": 256, 00:18:02.527 "data_size": 7936 00:18:02.527 } 00:18:02.527 ] 00:18:02.527 }' 00:18:02.527 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.527 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.527 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.786 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.786 07:49:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86169 00:18:02.786 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86169 ']' 00:18:02.786 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86169 00:18:02.786 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:02.786 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86169 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.787 killing process with pid 86169 00:18:02.787 Received shutdown signal, test time was about 60.000000 seconds 00:18:02.787 00:18:02.787 Latency(us) 00:18:02.787 [2024-11-29T07:49:52.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.787 [2024-11-29T07:49:52.732Z] =================================================================================================================== 00:18:02.787 [2024-11-29T07:49:52.732Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86169' 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86169 00:18:02.787 [2024-11-29 07:49:52.518358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.787 [2024-11-29 07:49:52.518474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.787 07:49:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86169 00:18:02.787 [2024-11-29 
07:49:52.518522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.787 [2024-11-29 07:49:52.518535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:03.046 [2024-11-29 07:49:52.798981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.986 07:49:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:03.986 00:18:03.986 real 0m19.707s 00:18:03.986 user 0m25.789s 00:18:03.986 sys 0m2.663s 00:18:03.986 ************************************ 00:18:03.986 END TEST raid_rebuild_test_sb_4k 00:18:03.986 ************************************ 00:18:03.986 07:49:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.986 07:49:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.986 07:49:53 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:03.986 07:49:53 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:03.986 07:49:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:03.986 07:49:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.986 07:49:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.986 ************************************ 00:18:03.986 START TEST raid_state_function_test_sb_md_separate 00:18:03.986 ************************************ 00:18:03.986 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:03.986 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:03.986 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:03.986 
07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:03.986 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:03.986 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:04.246 07:49:53 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86859 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86859' 00:18:04.246 Process raid pid: 86859 00:18:04.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86859 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86859 ']' 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.246 07:49:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.246 [2024-11-29 07:49:54.029377] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:04.246 [2024-11-29 07:49:54.029602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.506 [2024-11-29 07:49:54.209080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.506 [2024-11-29 07:49:54.312679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.765 [2024-11-29 07:49:54.519185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.765 [2024-11-29 07:49:54.519303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.025 [2024-11-29 07:49:54.843540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.025 [2024-11-29 07:49:54.843668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:05.025 [2024-11-29 07:49:54.843698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.025 [2024-11-29 07:49:54.843720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.025 07:49:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.025 "name": "Existed_Raid", 00:18:05.025 "uuid": "99677e9f-3b74-4601-9a53-c92d296553e7", 00:18:05.025 "strip_size_kb": 0, 00:18:05.025 "state": "configuring", 00:18:05.025 "raid_level": "raid1", 00:18:05.025 "superblock": true, 00:18:05.025 "num_base_bdevs": 2, 00:18:05.025 "num_base_bdevs_discovered": 0, 00:18:05.025 "num_base_bdevs_operational": 2, 00:18:05.025 "base_bdevs_list": [ 00:18:05.025 { 00:18:05.025 "name": "BaseBdev1", 00:18:05.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.025 "is_configured": false, 00:18:05.025 "data_offset": 0, 00:18:05.025 "data_size": 0 00:18:05.025 }, 00:18:05.025 { 00:18:05.025 "name": "BaseBdev2", 00:18:05.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.025 "is_configured": false, 00:18:05.025 "data_offset": 0, 00:18:05.025 "data_size": 0 00:18:05.025 } 00:18:05.025 ] 00:18:05.025 }' 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.025 07:49:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 [2024-11-29 
07:49:55.330637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.593 [2024-11-29 07:49:55.330716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 [2024-11-29 07:49:55.342618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.593 [2024-11-29 07:49:55.342713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.593 [2024-11-29 07:49:55.342739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.593 [2024-11-29 07:49:55.342763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 [2024-11-29 07:49:55.391993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.593 BaseBdev1 
00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.593 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.593 [ 00:18:05.593 { 00:18:05.593 "name": "BaseBdev1", 00:18:05.593 "aliases": [ 00:18:05.593 "880af73f-1605-4836-ad47-7ad0a5eabf02" 00:18:05.593 ], 00:18:05.593 "product_name": "Malloc disk", 00:18:05.593 
"block_size": 4096, 00:18:05.593 "num_blocks": 8192, 00:18:05.593 "uuid": "880af73f-1605-4836-ad47-7ad0a5eabf02", 00:18:05.593 "md_size": 32, 00:18:05.593 "md_interleave": false, 00:18:05.593 "dif_type": 0, 00:18:05.593 "assigned_rate_limits": { 00:18:05.593 "rw_ios_per_sec": 0, 00:18:05.593 "rw_mbytes_per_sec": 0, 00:18:05.593 "r_mbytes_per_sec": 0, 00:18:05.593 "w_mbytes_per_sec": 0 00:18:05.593 }, 00:18:05.593 "claimed": true, 00:18:05.593 "claim_type": "exclusive_write", 00:18:05.594 "zoned": false, 00:18:05.594 "supported_io_types": { 00:18:05.594 "read": true, 00:18:05.594 "write": true, 00:18:05.594 "unmap": true, 00:18:05.594 "flush": true, 00:18:05.594 "reset": true, 00:18:05.594 "nvme_admin": false, 00:18:05.594 "nvme_io": false, 00:18:05.594 "nvme_io_md": false, 00:18:05.594 "write_zeroes": true, 00:18:05.594 "zcopy": true, 00:18:05.594 "get_zone_info": false, 00:18:05.594 "zone_management": false, 00:18:05.594 "zone_append": false, 00:18:05.594 "compare": false, 00:18:05.594 "compare_and_write": false, 00:18:05.594 "abort": true, 00:18:05.594 "seek_hole": false, 00:18:05.594 "seek_data": false, 00:18:05.594 "copy": true, 00:18:05.594 "nvme_iov_md": false 00:18:05.594 }, 00:18:05.594 "memory_domains": [ 00:18:05.594 { 00:18:05.594 "dma_device_id": "system", 00:18:05.594 "dma_device_type": 1 00:18:05.594 }, 00:18:05.594 { 00:18:05.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.594 "dma_device_type": 2 00:18:05.594 } 00:18:05.594 ], 00:18:05.594 "driver_specific": {} 00:18:05.594 } 00:18:05.594 ] 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:05.594 07:49:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.594 "name": "Existed_Raid", 00:18:05.594 "uuid": "43392776-1080-4cd5-a1a1-f708ed9de665", 
00:18:05.594 "strip_size_kb": 0, 00:18:05.594 "state": "configuring", 00:18:05.594 "raid_level": "raid1", 00:18:05.594 "superblock": true, 00:18:05.594 "num_base_bdevs": 2, 00:18:05.594 "num_base_bdevs_discovered": 1, 00:18:05.594 "num_base_bdevs_operational": 2, 00:18:05.594 "base_bdevs_list": [ 00:18:05.594 { 00:18:05.594 "name": "BaseBdev1", 00:18:05.594 "uuid": "880af73f-1605-4836-ad47-7ad0a5eabf02", 00:18:05.594 "is_configured": true, 00:18:05.594 "data_offset": 256, 00:18:05.594 "data_size": 7936 00:18:05.594 }, 00:18:05.594 { 00:18:05.594 "name": "BaseBdev2", 00:18:05.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.594 "is_configured": false, 00:18:05.594 "data_offset": 0, 00:18:05.594 "data_size": 0 00:18:05.594 } 00:18:05.594 ] 00:18:05.594 }' 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.594 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 [2024-11-29 07:49:55.903247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.162 [2024-11-29 07:49:55.903344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:06.162 07:49:55 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 [2024-11-29 07:49:55.915269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.162 [2024-11-29 07:49:55.917166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.162 [2024-11-29 07:49:55.917257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.162 "name": "Existed_Raid", 00:18:06.162 "uuid": "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0", 00:18:06.162 "strip_size_kb": 0, 00:18:06.162 "state": "configuring", 00:18:06.162 "raid_level": "raid1", 00:18:06.162 "superblock": true, 00:18:06.162 "num_base_bdevs": 2, 00:18:06.162 "num_base_bdevs_discovered": 1, 00:18:06.162 "num_base_bdevs_operational": 2, 00:18:06.162 "base_bdevs_list": [ 00:18:06.162 { 00:18:06.162 "name": "BaseBdev1", 00:18:06.162 "uuid": "880af73f-1605-4836-ad47-7ad0a5eabf02", 00:18:06.162 "is_configured": true, 00:18:06.162 "data_offset": 256, 00:18:06.162 "data_size": 7936 00:18:06.162 }, 00:18:06.162 { 00:18:06.162 "name": "BaseBdev2", 00:18:06.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.162 "is_configured": false, 00:18:06.162 "data_offset": 0, 00:18:06.162 "data_size": 0 00:18:06.162 } 00:18:06.162 ] 00:18:06.162 }' 00:18:06.162 07:49:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.162 07:49:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.730 [2024-11-29 07:49:56.486476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.730 [2024-11-29 07:49:56.486782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:06.730 [2024-11-29 07:49:56.486842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.730 [2024-11-29 07:49:56.486956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.730 [2024-11-29 07:49:56.487135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:06.730 [2024-11-29 07:49:56.487188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:06.730 [2024-11-29 07:49:56.487316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.730 BaseBdev2 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.730 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.731 [ 00:18:06.731 { 00:18:06.731 "name": "BaseBdev2", 00:18:06.731 "aliases": [ 00:18:06.731 "ba630241-beab-49e8-8364-0e6128088d9a" 00:18:06.731 ], 00:18:06.731 "product_name": "Malloc disk", 00:18:06.731 "block_size": 4096, 00:18:06.731 "num_blocks": 8192, 00:18:06.731 "uuid": "ba630241-beab-49e8-8364-0e6128088d9a", 00:18:06.731 "md_size": 32, 00:18:06.731 "md_interleave": false, 00:18:06.731 "dif_type": 0, 00:18:06.731 "assigned_rate_limits": { 00:18:06.731 "rw_ios_per_sec": 0, 00:18:06.731 "rw_mbytes_per_sec": 0, 00:18:06.731 "r_mbytes_per_sec": 0, 00:18:06.731 "w_mbytes_per_sec": 0 00:18:06.731 }, 00:18:06.731 "claimed": true, 00:18:06.731 "claim_type": 
"exclusive_write", 00:18:06.731 "zoned": false, 00:18:06.731 "supported_io_types": { 00:18:06.731 "read": true, 00:18:06.731 "write": true, 00:18:06.731 "unmap": true, 00:18:06.731 "flush": true, 00:18:06.731 "reset": true, 00:18:06.731 "nvme_admin": false, 00:18:06.731 "nvme_io": false, 00:18:06.731 "nvme_io_md": false, 00:18:06.731 "write_zeroes": true, 00:18:06.731 "zcopy": true, 00:18:06.731 "get_zone_info": false, 00:18:06.731 "zone_management": false, 00:18:06.731 "zone_append": false, 00:18:06.731 "compare": false, 00:18:06.731 "compare_and_write": false, 00:18:06.731 "abort": true, 00:18:06.731 "seek_hole": false, 00:18:06.731 "seek_data": false, 00:18:06.731 "copy": true, 00:18:06.731 "nvme_iov_md": false 00:18:06.731 }, 00:18:06.731 "memory_domains": [ 00:18:06.731 { 00:18:06.731 "dma_device_id": "system", 00:18:06.731 "dma_device_type": 1 00:18:06.731 }, 00:18:06.731 { 00:18:06.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.731 "dma_device_type": 2 00:18:06.731 } 00:18:06.731 ], 00:18:06.731 "driver_specific": {} 00:18:06.731 } 00:18:06.731 ] 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.731 
07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.731 "name": "Existed_Raid", 00:18:06.731 "uuid": "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0", 00:18:06.731 "strip_size_kb": 0, 00:18:06.731 "state": "online", 00:18:06.731 "raid_level": "raid1", 00:18:06.731 "superblock": true, 00:18:06.731 "num_base_bdevs": 2, 00:18:06.731 "num_base_bdevs_discovered": 2, 00:18:06.731 "num_base_bdevs_operational": 2, 00:18:06.731 
"base_bdevs_list": [ 00:18:06.731 { 00:18:06.731 "name": "BaseBdev1", 00:18:06.731 "uuid": "880af73f-1605-4836-ad47-7ad0a5eabf02", 00:18:06.731 "is_configured": true, 00:18:06.731 "data_offset": 256, 00:18:06.731 "data_size": 7936 00:18:06.731 }, 00:18:06.731 { 00:18:06.731 "name": "BaseBdev2", 00:18:06.731 "uuid": "ba630241-beab-49e8-8364-0e6128088d9a", 00:18:06.731 "is_configured": true, 00:18:06.731 "data_offset": 256, 00:18:06.731 "data_size": 7936 00:18:06.731 } 00:18:06.731 ] 00:18:06.731 }' 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.731 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:07.299 [2024-11-29 07:49:56.961941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.299 07:49:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.299 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.299 "name": "Existed_Raid", 00:18:07.299 "aliases": [ 00:18:07.299 "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0" 00:18:07.299 ], 00:18:07.299 "product_name": "Raid Volume", 00:18:07.299 "block_size": 4096, 00:18:07.299 "num_blocks": 7936, 00:18:07.299 "uuid": "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0", 00:18:07.299 "md_size": 32, 00:18:07.299 "md_interleave": false, 00:18:07.299 "dif_type": 0, 00:18:07.299 "assigned_rate_limits": { 00:18:07.299 "rw_ios_per_sec": 0, 00:18:07.299 "rw_mbytes_per_sec": 0, 00:18:07.299 "r_mbytes_per_sec": 0, 00:18:07.299 "w_mbytes_per_sec": 0 00:18:07.299 }, 00:18:07.299 "claimed": false, 00:18:07.299 "zoned": false, 00:18:07.299 "supported_io_types": { 00:18:07.299 "read": true, 00:18:07.299 "write": true, 00:18:07.299 "unmap": false, 00:18:07.299 "flush": false, 00:18:07.299 "reset": true, 00:18:07.299 "nvme_admin": false, 00:18:07.299 "nvme_io": false, 00:18:07.299 "nvme_io_md": false, 00:18:07.299 "write_zeroes": true, 00:18:07.299 "zcopy": false, 00:18:07.299 "get_zone_info": false, 00:18:07.299 "zone_management": false, 00:18:07.299 "zone_append": false, 00:18:07.299 "compare": false, 00:18:07.299 "compare_and_write": false, 00:18:07.299 "abort": false, 00:18:07.299 "seek_hole": false, 00:18:07.299 "seek_data": false, 00:18:07.299 "copy": false, 00:18:07.299 "nvme_iov_md": false 00:18:07.299 }, 00:18:07.299 "memory_domains": [ 00:18:07.299 { 00:18:07.299 "dma_device_id": "system", 00:18:07.299 "dma_device_type": 1 00:18:07.299 }, 00:18:07.299 { 00:18:07.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.299 "dma_device_type": 2 00:18:07.299 }, 00:18:07.299 { 
00:18:07.299 "dma_device_id": "system", 00:18:07.299 "dma_device_type": 1 00:18:07.299 }, 00:18:07.299 { 00:18:07.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.299 "dma_device_type": 2 00:18:07.299 } 00:18:07.299 ], 00:18:07.299 "driver_specific": { 00:18:07.299 "raid": { 00:18:07.299 "uuid": "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0", 00:18:07.299 "strip_size_kb": 0, 00:18:07.299 "state": "online", 00:18:07.299 "raid_level": "raid1", 00:18:07.299 "superblock": true, 00:18:07.299 "num_base_bdevs": 2, 00:18:07.299 "num_base_bdevs_discovered": 2, 00:18:07.299 "num_base_bdevs_operational": 2, 00:18:07.299 "base_bdevs_list": [ 00:18:07.299 { 00:18:07.299 "name": "BaseBdev1", 00:18:07.299 "uuid": "880af73f-1605-4836-ad47-7ad0a5eabf02", 00:18:07.299 "is_configured": true, 00:18:07.299 "data_offset": 256, 00:18:07.299 "data_size": 7936 00:18:07.299 }, 00:18:07.300 { 00:18:07.300 "name": "BaseBdev2", 00:18:07.300 "uuid": "ba630241-beab-49e8-8364-0e6128088d9a", 00:18:07.300 "is_configured": true, 00:18:07.300 "data_offset": 256, 00:18:07.300 "data_size": 7936 00:18:07.300 } 00:18:07.300 ] 00:18:07.300 } 00:18:07.300 } 00:18:07.300 }' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:07.300 BaseBdev2' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.300 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.300 [2024-11-29 07:49:57.185341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.558 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.558 "name": "Existed_Raid", 00:18:07.558 "uuid": "5c896e2f-2dbc-49b4-8d32-b3b26fb8eef0", 00:18:07.558 "strip_size_kb": 0, 00:18:07.558 "state": "online", 00:18:07.558 "raid_level": "raid1", 00:18:07.558 "superblock": true, 00:18:07.558 "num_base_bdevs": 2, 00:18:07.558 "num_base_bdevs_discovered": 1, 00:18:07.558 "num_base_bdevs_operational": 1, 00:18:07.559 "base_bdevs_list": [ 00:18:07.559 { 00:18:07.559 "name": null, 00:18:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.559 "is_configured": false, 00:18:07.559 "data_offset": 0, 00:18:07.559 "data_size": 7936 00:18:07.559 }, 00:18:07.559 { 00:18:07.559 "name": "BaseBdev2", 00:18:07.559 "uuid": 
"ba630241-beab-49e8-8364-0e6128088d9a", 00:18:07.559 "is_configured": true, 00:18:07.559 "data_offset": 256, 00:18:07.559 "data_size": 7936 00:18:07.559 } 00:18:07.559 ] 00:18:07.559 }' 00:18:07.559 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.559 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.818 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:08.078 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.078 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.079 [2024-11-29 07:49:57.788168] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.079 [2024-11-29 07:49:57.788334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.079 [2024-11-29 07:49:57.884396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.079 [2024-11-29 07:49:57.884541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.079 [2024-11-29 07:49:57.884583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:08.079 07:49:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86859 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86859 ']' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86859 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86859 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86859' 00:18:08.079 killing process with pid 86859 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86859 00:18:08.079 [2024-11-29 07:49:57.982432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.079 07:49:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86859 00:18:08.079 [2024-11-29 07:49:57.998555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.461 07:49:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:09.461 ************************************ 00:18:09.461 END TEST raid_state_function_test_sb_md_separate 00:18:09.461 ************************************ 00:18:09.461 00:18:09.461 real 0m5.134s 00:18:09.461 user 0m7.413s 
00:18:09.461 sys 0m0.907s 00:18:09.461 07:49:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.461 07:49:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.461 07:49:59 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:09.461 07:49:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:09.461 07:49:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.461 07:49:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.461 ************************************ 00:18:09.461 START TEST raid_superblock_test_md_separate 00:18:09.461 ************************************ 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:09.461 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87106 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:09.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87106 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87106 ']' 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.462 07:49:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.462 [2024-11-29 07:49:59.243509] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:09.462 [2024-11-29 07:49:59.243734] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87106 ] 00:18:09.722 [2024-11-29 07:49:59.426153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.722 [2024-11-29 07:49:59.537441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.982 [2024-11-29 07:49:59.731547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.982 [2024-11-29 07:49:59.731677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:10.243 07:50:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.243 malloc1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.243 [2024-11-29 07:50:00.091719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.243 [2024-11-29 07:50:00.091783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.243 [2024-11-29 07:50:00.091803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.243 [2024-11-29 07:50:00.091812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.243 [2024-11-29 07:50:00.093645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.243 [2024-11-29 07:50:00.093681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:10.243 pt1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.243 malloc2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.243 [2024-11-29 07:50:00.148029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.243 [2024-11-29 07:50:00.148188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.243 [2024-11-29 07:50:00.148226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.243 [2024-11-29 07:50:00.148253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.243 [2024-11-29 07:50:00.150054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.243 [2024-11-29 07:50:00.150129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.243 pt2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.243 [2024-11-29 07:50:00.160041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.243 [2024-11-29 07:50:00.161840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.243 [2024-11-29 07:50:00.162066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.243 [2024-11-29 07:50:00.162136] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.243 [2024-11-29 07:50:00.162234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:10.243 [2024-11-29 07:50:00.162386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.243 [2024-11-29 07:50:00.162440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.243 [2024-11-29 07:50:00.162580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.243 07:50:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.243 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.502 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.502 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.502 "name": "raid_bdev1", 00:18:10.502 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:10.502 "strip_size_kb": 0, 00:18:10.502 "state": "online", 00:18:10.502 "raid_level": "raid1", 00:18:10.502 "superblock": true, 00:18:10.502 "num_base_bdevs": 2, 00:18:10.502 "num_base_bdevs_discovered": 2, 00:18:10.502 "num_base_bdevs_operational": 2, 00:18:10.502 "base_bdevs_list": [ 00:18:10.502 { 00:18:10.502 "name": "pt1", 00:18:10.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.502 "is_configured": true, 00:18:10.502 "data_offset": 256, 00:18:10.502 "data_size": 7936 00:18:10.502 }, 00:18:10.502 { 00:18:10.502 "name": "pt2", 00:18:10.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.502 "is_configured": true, 00:18:10.502 "data_offset": 256, 00:18:10.502 "data_size": 7936 00:18:10.502 } 00:18:10.502 ] 00:18:10.502 }' 00:18:10.502 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.502 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.762 [2024-11-29 07:50:00.627584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.762 "name": "raid_bdev1", 00:18:10.762 "aliases": [ 00:18:10.762 "64a116ac-457e-4583-9e12-85ede7bd51bf" 00:18:10.762 ], 00:18:10.762 "product_name": "Raid Volume", 00:18:10.762 "block_size": 4096, 00:18:10.762 "num_blocks": 7936, 00:18:10.762 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:10.762 "md_size": 32, 00:18:10.762 "md_interleave": false, 00:18:10.762 "dif_type": 0, 00:18:10.762 "assigned_rate_limits": { 00:18:10.762 "rw_ios_per_sec": 0, 00:18:10.762 "rw_mbytes_per_sec": 0, 00:18:10.762 "r_mbytes_per_sec": 0, 00:18:10.762 "w_mbytes_per_sec": 0 00:18:10.762 }, 00:18:10.762 "claimed": false, 00:18:10.762 "zoned": false, 
00:18:10.762 "supported_io_types": { 00:18:10.762 "read": true, 00:18:10.762 "write": true, 00:18:10.762 "unmap": false, 00:18:10.762 "flush": false, 00:18:10.762 "reset": true, 00:18:10.762 "nvme_admin": false, 00:18:10.762 "nvme_io": false, 00:18:10.762 "nvme_io_md": false, 00:18:10.762 "write_zeroes": true, 00:18:10.762 "zcopy": false, 00:18:10.762 "get_zone_info": false, 00:18:10.762 "zone_management": false, 00:18:10.762 "zone_append": false, 00:18:10.762 "compare": false, 00:18:10.762 "compare_and_write": false, 00:18:10.762 "abort": false, 00:18:10.762 "seek_hole": false, 00:18:10.762 "seek_data": false, 00:18:10.762 "copy": false, 00:18:10.762 "nvme_iov_md": false 00:18:10.762 }, 00:18:10.762 "memory_domains": [ 00:18:10.762 { 00:18:10.762 "dma_device_id": "system", 00:18:10.762 "dma_device_type": 1 00:18:10.762 }, 00:18:10.762 { 00:18:10.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.762 "dma_device_type": 2 00:18:10.762 }, 00:18:10.762 { 00:18:10.762 "dma_device_id": "system", 00:18:10.762 "dma_device_type": 1 00:18:10.762 }, 00:18:10.762 { 00:18:10.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.762 "dma_device_type": 2 00:18:10.762 } 00:18:10.762 ], 00:18:10.762 "driver_specific": { 00:18:10.762 "raid": { 00:18:10.762 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:10.762 "strip_size_kb": 0, 00:18:10.762 "state": "online", 00:18:10.762 "raid_level": "raid1", 00:18:10.762 "superblock": true, 00:18:10.762 "num_base_bdevs": 2, 00:18:10.762 "num_base_bdevs_discovered": 2, 00:18:10.762 "num_base_bdevs_operational": 2, 00:18:10.762 "base_bdevs_list": [ 00:18:10.762 { 00:18:10.762 "name": "pt1", 00:18:10.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.762 "is_configured": true, 00:18:10.762 "data_offset": 256, 00:18:10.762 "data_size": 7936 00:18:10.762 }, 00:18:10.762 { 00:18:10.762 "name": "pt2", 00:18:10.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.762 "is_configured": true, 00:18:10.762 "data_offset": 256, 
00:18:10.762 "data_size": 7936 00:18:10.762 } 00:18:10.762 ] 00:18:10.762 } 00:18:10.762 } 00:18:10.762 }' 00:18:10.762 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.022 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:11.022 pt2' 00:18:11.022 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.022 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:11.022 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 [2024-11-29 07:50:00.851154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64a116ac-457e-4583-9e12-85ede7bd51bf 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 64a116ac-457e-4583-9e12-85ede7bd51bf ']' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 [2024-11-29 07:50:00.894832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.023 [2024-11-29 07:50:00.894898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.023 [2024-11-29 07:50:00.894995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.023 [2024-11-29 07:50:00.895049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.023 [2024-11-29 07:50:00.895060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.023 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.283 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.283 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:11.283 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.283 07:50:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:11.283 07:50:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:11.283 07:50:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.283 [2024-11-29 07:50:01.034604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:11.283 [2024-11-29 07:50:01.036533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:11.283 [2024-11-29 07:50:01.036605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:11.283 [2024-11-29 07:50:01.036655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:11.283 [2024-11-29 07:50:01.036669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.283 [2024-11-29 07:50:01.036678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:11.283 request: 00:18:11.283 { 00:18:11.283 "name": 
"raid_bdev1", 00:18:11.283 "raid_level": "raid1", 00:18:11.283 "base_bdevs": [ 00:18:11.283 "malloc1", 00:18:11.283 "malloc2" 00:18:11.283 ], 00:18:11.283 "superblock": false, 00:18:11.283 "method": "bdev_raid_create", 00:18:11.283 "req_id": 1 00:18:11.283 } 00:18:11.283 Got JSON-RPC error response 00:18:11.283 response: 00:18:11.283 { 00:18:11.283 "code": -17, 00:18:11.283 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:11.283 } 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.283 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.283 [2024-11-29 07:50:01.090496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.283 [2024-11-29 07:50:01.090583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.283 [2024-11-29 07:50:01.090612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:11.283 [2024-11-29 07:50:01.090638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.283 [2024-11-29 07:50:01.092525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.284 [2024-11-29 07:50:01.092612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.284 [2024-11-29 07:50:01.092672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:11.284 [2024-11-29 07:50:01.092740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.284 pt1 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.284 "name": "raid_bdev1", 00:18:11.284 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:11.284 "strip_size_kb": 0, 00:18:11.284 "state": "configuring", 00:18:11.284 "raid_level": "raid1", 00:18:11.284 "superblock": true, 00:18:11.284 "num_base_bdevs": 2, 00:18:11.284 "num_base_bdevs_discovered": 1, 00:18:11.284 "num_base_bdevs_operational": 2, 00:18:11.284 "base_bdevs_list": [ 00:18:11.284 { 00:18:11.284 "name": "pt1", 00:18:11.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.284 "is_configured": true, 00:18:11.284 "data_offset": 256, 00:18:11.284 "data_size": 7936 00:18:11.284 }, 00:18:11.284 { 00:18:11.284 "name": null, 00:18:11.284 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.284 "is_configured": false, 00:18:11.284 "data_offset": 256, 00:18:11.284 "data_size": 7936 00:18:11.284 } 00:18:11.284 ] 00:18:11.284 }' 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.284 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.855 [2024-11-29 07:50:01.517759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.855 [2024-11-29 07:50:01.517882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.855 [2024-11-29 07:50:01.517917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:11.855 [2024-11-29 07:50:01.517946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.855 [2024-11-29 07:50:01.518130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.855 [2024-11-29 07:50:01.518195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.855 [2024-11-29 07:50:01.518261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:11.855 [2024-11-29 07:50:01.518306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.855 [2024-11-29 07:50:01.518451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:11.855 [2024-11-29 07:50:01.518489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:11.855 [2024-11-29 07:50:01.518576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:11.855 [2024-11-29 07:50:01.518717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:11.855 [2024-11-29 07:50:01.518752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:11.855 [2024-11-29 07:50:01.518881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.855 pt2 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.855 "name": "raid_bdev1", 00:18:11.855 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:11.855 "strip_size_kb": 0, 00:18:11.855 "state": "online", 00:18:11.855 "raid_level": "raid1", 00:18:11.855 "superblock": true, 00:18:11.855 "num_base_bdevs": 2, 00:18:11.855 "num_base_bdevs_discovered": 2, 00:18:11.855 "num_base_bdevs_operational": 2, 00:18:11.855 "base_bdevs_list": [ 00:18:11.855 { 00:18:11.855 "name": "pt1", 00:18:11.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.855 "is_configured": true, 00:18:11.855 "data_offset": 256, 00:18:11.855 "data_size": 7936 00:18:11.855 }, 00:18:11.855 { 00:18:11.855 "name": "pt2", 00:18:11.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.855 "is_configured": true, 00:18:11.855 "data_offset": 256, 
00:18:11.855 "data_size": 7936 00:18:11.855 } 00:18:11.855 ] 00:18:11.855 }' 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.855 07:50:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.115 [2024-11-29 07:50:02.013215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.115 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.115 "name": "raid_bdev1", 00:18:12.115 "aliases": [ 00:18:12.115 "64a116ac-457e-4583-9e12-85ede7bd51bf" 00:18:12.115 ], 00:18:12.115 "product_name": 
"Raid Volume", 00:18:12.115 "block_size": 4096, 00:18:12.115 "num_blocks": 7936, 00:18:12.115 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:12.115 "md_size": 32, 00:18:12.115 "md_interleave": false, 00:18:12.115 "dif_type": 0, 00:18:12.115 "assigned_rate_limits": { 00:18:12.115 "rw_ios_per_sec": 0, 00:18:12.115 "rw_mbytes_per_sec": 0, 00:18:12.115 "r_mbytes_per_sec": 0, 00:18:12.115 "w_mbytes_per_sec": 0 00:18:12.115 }, 00:18:12.115 "claimed": false, 00:18:12.115 "zoned": false, 00:18:12.115 "supported_io_types": { 00:18:12.115 "read": true, 00:18:12.115 "write": true, 00:18:12.115 "unmap": false, 00:18:12.115 "flush": false, 00:18:12.115 "reset": true, 00:18:12.115 "nvme_admin": false, 00:18:12.115 "nvme_io": false, 00:18:12.115 "nvme_io_md": false, 00:18:12.115 "write_zeroes": true, 00:18:12.115 "zcopy": false, 00:18:12.115 "get_zone_info": false, 00:18:12.115 "zone_management": false, 00:18:12.115 "zone_append": false, 00:18:12.115 "compare": false, 00:18:12.115 "compare_and_write": false, 00:18:12.115 "abort": false, 00:18:12.115 "seek_hole": false, 00:18:12.115 "seek_data": false, 00:18:12.115 "copy": false, 00:18:12.115 "nvme_iov_md": false 00:18:12.115 }, 00:18:12.115 "memory_domains": [ 00:18:12.115 { 00:18:12.115 "dma_device_id": "system", 00:18:12.115 "dma_device_type": 1 00:18:12.115 }, 00:18:12.115 { 00:18:12.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.115 "dma_device_type": 2 00:18:12.115 }, 00:18:12.115 { 00:18:12.115 "dma_device_id": "system", 00:18:12.115 "dma_device_type": 1 00:18:12.115 }, 00:18:12.115 { 00:18:12.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.115 "dma_device_type": 2 00:18:12.115 } 00:18:12.115 ], 00:18:12.115 "driver_specific": { 00:18:12.115 "raid": { 00:18:12.115 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:12.115 "strip_size_kb": 0, 00:18:12.115 "state": "online", 00:18:12.115 "raid_level": "raid1", 00:18:12.115 "superblock": true, 00:18:12.115 "num_base_bdevs": 2, 00:18:12.116 
"num_base_bdevs_discovered": 2, 00:18:12.116 "num_base_bdevs_operational": 2, 00:18:12.116 "base_bdevs_list": [ 00:18:12.116 { 00:18:12.116 "name": "pt1", 00:18:12.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.116 "is_configured": true, 00:18:12.116 "data_offset": 256, 00:18:12.116 "data_size": 7936 00:18:12.116 }, 00:18:12.116 { 00:18:12.116 "name": "pt2", 00:18:12.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.116 "is_configured": true, 00:18:12.116 "data_offset": 256, 00:18:12.116 "data_size": 7936 00:18:12.116 } 00:18:12.116 ] 00:18:12.116 } 00:18:12.116 } 00:18:12.116 }' 00:18:12.116 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:12.376 pt2' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.376 
07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.376 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:12.377 [2024-11-29 07:50:02.208846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 64a116ac-457e-4583-9e12-85ede7bd51bf '!=' 64a116ac-457e-4583-9e12-85ede7bd51bf ']' 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 [2024-11-29 07:50:02.256545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.377 07:50:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.377 "name": "raid_bdev1", 00:18:12.377 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:12.377 "strip_size_kb": 0, 00:18:12.377 "state": "online", 00:18:12.377 "raid_level": "raid1", 00:18:12.377 "superblock": true, 00:18:12.377 "num_base_bdevs": 2, 00:18:12.377 "num_base_bdevs_discovered": 1, 00:18:12.377 "num_base_bdevs_operational": 1, 00:18:12.377 "base_bdevs_list": [ 00:18:12.377 { 00:18:12.377 "name": null, 00:18:12.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.377 "is_configured": false, 00:18:12.377 "data_offset": 0, 00:18:12.377 "data_size": 7936 00:18:12.377 }, 00:18:12.377 { 00:18:12.377 "name": "pt2", 00:18:12.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.377 "is_configured": true, 00:18:12.377 "data_offset": 256, 00:18:12.377 "data_size": 7936 00:18:12.377 } 00:18:12.377 ] 00:18:12.377 }' 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:12.377 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 [2024-11-29 07:50:02.683814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.946 [2024-11-29 07:50:02.683889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.946 [2024-11-29 07:50:02.683981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.946 [2024-11-29 07:50:02.684023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.946 [2024-11-29 07:50:02.684034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:12.946 07:50:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 [2024-11-29 07:50:02.759679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.946 [2024-11-29 07:50:02.759770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.946 
[2024-11-29 07:50:02.759802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:12.946 [2024-11-29 07:50:02.759846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.946 [2024-11-29 07:50:02.761762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.946 [2024-11-29 07:50:02.761850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.946 [2024-11-29 07:50:02.761914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.946 [2024-11-29 07:50:02.761995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.946 [2024-11-29 07:50:02.762101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:12.946 [2024-11-29 07:50:02.762149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.946 [2024-11-29 07:50:02.762240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.946 [2024-11-29 07:50:02.762386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:12.946 [2024-11-29 07:50:02.762422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:12.946 [2024-11-29 07:50:02.762541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.946 pt2 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.946 "name": "raid_bdev1", 00:18:12.946 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:12.946 "strip_size_kb": 0, 00:18:12.946 "state": "online", 00:18:12.946 "raid_level": "raid1", 00:18:12.946 "superblock": true, 00:18:12.946 "num_base_bdevs": 2, 00:18:12.946 "num_base_bdevs_discovered": 1, 00:18:12.946 "num_base_bdevs_operational": 1, 00:18:12.946 "base_bdevs_list": [ 00:18:12.946 { 00:18:12.946 
"name": null, 00:18:12.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.946 "is_configured": false, 00:18:12.946 "data_offset": 256, 00:18:12.946 "data_size": 7936 00:18:12.946 }, 00:18:12.946 { 00:18:12.946 "name": "pt2", 00:18:12.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.946 "is_configured": true, 00:18:12.946 "data_offset": 256, 00:18:12.946 "data_size": 7936 00:18:12.946 } 00:18:12.946 ] 00:18:12.946 }' 00:18:12.946 07:50:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.947 07:50:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 [2024-11-29 07:50:03.190943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.515 [2024-11-29 07:50:03.191018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.515 [2024-11-29 07:50:03.191082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.515 [2024-11-29 07:50:03.191165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.515 [2024-11-29 07:50:03.191210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 [2024-11-29 07:50:03.238896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.515 [2024-11-29 07:50:03.238996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.515 [2024-11-29 07:50:03.239017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:13.515 [2024-11-29 07:50:03.239025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.515 [2024-11-29 07:50:03.240866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.515 [2024-11-29 07:50:03.240903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.515 [2024-11-29 07:50:03.240947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.515 
[2024-11-29 07:50:03.240982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.515 [2024-11-29 07:50:03.241115] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:13.515 [2024-11-29 07:50:03.241126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.515 [2024-11-29 07:50:03.241141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:13.515 [2024-11-29 07:50:03.241208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.515 [2024-11-29 07:50:03.241284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:13.515 [2024-11-29 07:50:03.241291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.515 [2024-11-29 07:50:03.241344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:13.515 [2024-11-29 07:50:03.241458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:13.515 [2024-11-29 07:50:03.241467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:13.515 [2024-11-29 07:50:03.241560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.515 pt1 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.515 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.515 07:50:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.516 "name": "raid_bdev1", 00:18:13.516 "uuid": "64a116ac-457e-4583-9e12-85ede7bd51bf", 00:18:13.516 "strip_size_kb": 0, 00:18:13.516 "state": "online", 00:18:13.516 "raid_level": "raid1", 00:18:13.516 "superblock": true, 00:18:13.516 "num_base_bdevs": 2, 00:18:13.516 "num_base_bdevs_discovered": 1, 00:18:13.516 
"num_base_bdevs_operational": 1, 00:18:13.516 "base_bdevs_list": [ 00:18:13.516 { 00:18:13.516 "name": null, 00:18:13.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.516 "is_configured": false, 00:18:13.516 "data_offset": 256, 00:18:13.516 "data_size": 7936 00:18:13.516 }, 00:18:13.516 { 00:18:13.516 "name": "pt2", 00:18:13.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.516 "is_configured": true, 00:18:13.516 "data_offset": 256, 00:18:13.516 "data_size": 7936 00:18:13.516 } 00:18:13.516 ] 00:18:13.516 }' 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.516 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.775 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.775 [2024-11-29 
07:50:03.702290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 64a116ac-457e-4583-9e12-85ede7bd51bf '!=' 64a116ac-457e-4583-9e12-85ede7bd51bf ']' 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87106 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87106 ']' 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87106 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87106 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87106' 00:18:14.036 killing process with pid 87106 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87106 00:18:14.036 [2024-11-29 07:50:03.763124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.036 [2024-11-29 07:50:03.763235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.036 [2024-11-29 07:50:03.763301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:14.036 [2024-11-29 07:50:03.763353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:14.036 07:50:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87106 00:18:14.036 [2024-11-29 07:50:03.973694] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.416 07:50:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:15.416 00:18:15.416 real 0m5.892s 00:18:15.416 user 0m8.836s 00:18:15.416 sys 0m1.142s 00:18:15.416 07:50:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.416 07:50:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.416 ************************************ 00:18:15.416 END TEST raid_superblock_test_md_separate 00:18:15.416 ************************************ 00:18:15.416 07:50:05 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:15.416 07:50:05 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:15.416 07:50:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:15.416 07:50:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.416 07:50:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.416 ************************************ 00:18:15.416 START TEST raid_rebuild_test_sb_md_separate 00:18:15.416 ************************************ 00:18:15.416 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.417 
07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87434 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87434 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87434 ']' 00:18:15.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.417 07:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.417 [2024-11-29 07:50:05.218342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:15.417 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.417 Zero copy mechanism will not be used. 00:18:15.417 [2024-11-29 07:50:05.218557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87434 ] 00:18:15.676 [2024-11-29 07:50:05.392639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.676 [2024-11-29 07:50:05.499519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.937 [2024-11-29 07:50:05.695648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.937 [2024-11-29 07:50:05.695769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 BaseBdev1_malloc 
00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 [2024-11-29 07:50:06.070254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:16.198 [2024-11-29 07:50:06.070415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.198 [2024-11-29 07:50:06.070456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:16.198 [2024-11-29 07:50:06.070488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.198 [2024-11-29 07:50:06.072322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.198 [2024-11-29 07:50:06.072397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.198 BaseBdev1 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 BaseBdev2_malloc 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.198 [2024-11-29 07:50:06.123070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:16.198 [2024-11-29 07:50:06.123215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.198 [2024-11-29 07:50:06.123257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:16.198 [2024-11-29 07:50:06.123270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.198 [2024-11-29 07:50:06.125027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.198 [2024-11-29 07:50:06.125057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.198 BaseBdev2 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.198 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.459 spare_malloc 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.460 spare_delay 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.460 [2024-11-29 07:50:06.197501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.460 [2024-11-29 07:50:06.197636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.460 [2024-11-29 07:50:06.197676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:16.460 [2024-11-29 07:50:06.197716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.460 [2024-11-29 07:50:06.199635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.460 [2024-11-29 07:50:06.199719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.460 spare 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.460 [2024-11-29 07:50:06.209518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.460 [2024-11-29 07:50:06.211289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.460 [2024-11-29 07:50:06.211497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.460 [2024-11-29 07:50:06.211516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.460 [2024-11-29 07:50:06.211586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:16.460 [2024-11-29 07:50:06.211693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.460 [2024-11-29 07:50:06.211702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.460 [2024-11-29 07:50:06.211788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.460 07:50:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.460 "name": "raid_bdev1", 00:18:16.460 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:16.460 "strip_size_kb": 0, 00:18:16.460 "state": "online", 00:18:16.460 "raid_level": "raid1", 00:18:16.460 "superblock": true, 00:18:16.460 "num_base_bdevs": 2, 00:18:16.460 "num_base_bdevs_discovered": 2, 00:18:16.460 "num_base_bdevs_operational": 2, 00:18:16.460 "base_bdevs_list": [ 00:18:16.460 { 00:18:16.460 "name": "BaseBdev1", 00:18:16.460 "uuid": "ffd6dafb-98d4-5d5d-9563-d2073e421745", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 256, 00:18:16.460 "data_size": 7936 00:18:16.460 }, 00:18:16.460 { 00:18:16.460 "name": "BaseBdev2", 00:18:16.460 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:16.460 "is_configured": true, 00:18:16.460 "data_offset": 256, 00:18:16.460 "data_size": 7936 
00:18:16.460 } 00:18:16.460 ] 00:18:16.460 }' 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.460 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.721 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.721 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.721 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.721 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:16.721 [2024-11-29 07:50:06.649095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.721 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.982 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:16.982 [2024-11-29 07:50:06.900505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:16.982 /dev/nbd0 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.242 1+0 records in 00:18:17.242 1+0 records out 00:18:17.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499315 s, 8.2 MB/s 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.242 07:50:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:17.242 07:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:17.813 7936+0 records in 00:18:17.813 7936+0 records out 00:18:17.813 32505856 bytes (33 MB, 31 MiB) copied, 0.645325 s, 50.4 MB/s 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.813 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.074 [2024-11-29 07:50:07.824202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.074 07:50:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.074 [2024-11-29 07:50:07.844250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.074 "name": "raid_bdev1", 00:18:18.074 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:18.074 "strip_size_kb": 0, 00:18:18.074 "state": "online", 00:18:18.074 "raid_level": "raid1", 00:18:18.074 "superblock": true, 00:18:18.074 "num_base_bdevs": 2, 00:18:18.074 "num_base_bdevs_discovered": 1, 00:18:18.074 "num_base_bdevs_operational": 1, 00:18:18.074 "base_bdevs_list": [ 00:18:18.074 { 00:18:18.074 "name": null, 00:18:18.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.074 "is_configured": false, 00:18:18.074 "data_offset": 0, 00:18:18.074 "data_size": 7936 00:18:18.074 }, 00:18:18.074 { 00:18:18.074 "name": "BaseBdev2", 00:18:18.074 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:18.074 "is_configured": true, 00:18:18.074 "data_offset": 256, 00:18:18.074 "data_size": 7936 00:18:18.074 } 00:18:18.074 ] 00:18:18.074 }' 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.074 07:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.645 07:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.645 07:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.645 07:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.645 [2024-11-29 07:50:08.307507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.645 [2024-11-29 07:50:08.321087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:18.645 07:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.645 07:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:18.645 [2024-11-29 07:50:08.322911] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.585 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.585 "name": "raid_bdev1", 00:18:19.586 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:19.586 "strip_size_kb": 0, 00:18:19.586 "state": "online", 00:18:19.586 "raid_level": "raid1", 00:18:19.586 "superblock": true, 00:18:19.586 "num_base_bdevs": 2, 00:18:19.586 "num_base_bdevs_discovered": 2, 00:18:19.586 "num_base_bdevs_operational": 2, 00:18:19.586 "process": { 00:18:19.586 "type": "rebuild", 00:18:19.586 "target": "spare", 00:18:19.586 "progress": { 00:18:19.586 "blocks": 2560, 00:18:19.586 "percent": 32 00:18:19.586 } 00:18:19.586 }, 00:18:19.586 "base_bdevs_list": [ 00:18:19.586 { 00:18:19.586 "name": "spare", 00:18:19.586 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:19.586 "is_configured": true, 00:18:19.586 "data_offset": 256, 00:18:19.586 "data_size": 7936 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "name": "BaseBdev2", 00:18:19.586 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:19.586 "is_configured": true, 00:18:19.586 "data_offset": 256, 00:18:19.586 "data_size": 7936 00:18:19.586 } 00:18:19.586 ] 00:18:19.586 }' 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.586 07:50:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.586 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.586 [2024-11-29 07:50:09.475639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.586 [2024-11-29 07:50:09.527549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.586 [2024-11-29 07:50:09.527682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.586 [2024-11-29 07:50:09.527717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.586 [2024-11-29 07:50:09.527741] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.846 07:50:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.846 "name": "raid_bdev1", 00:18:19.846 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:19.846 "strip_size_kb": 0, 00:18:19.846 "state": "online", 00:18:19.846 "raid_level": "raid1", 00:18:19.846 "superblock": true, 00:18:19.846 "num_base_bdevs": 2, 00:18:19.846 "num_base_bdevs_discovered": 1, 00:18:19.846 "num_base_bdevs_operational": 1, 00:18:19.846 "base_bdevs_list": [ 00:18:19.846 { 00:18:19.846 "name": null, 00:18:19.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.846 "is_configured": false, 00:18:19.846 "data_offset": 0, 00:18:19.846 "data_size": 7936 00:18:19.846 }, 00:18:19.846 { 00:18:19.846 "name": "BaseBdev2", 00:18:19.846 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:19.846 "is_configured": true, 00:18:19.846 "data_offset": 256, 00:18:19.846 "data_size": 7936 00:18:19.846 } 00:18:19.846 ] 00:18:19.846 }' 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.846 07:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.106 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.366 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.366 "name": "raid_bdev1", 00:18:20.366 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:20.366 "strip_size_kb": 0, 00:18:20.366 "state": "online", 00:18:20.366 "raid_level": "raid1", 00:18:20.366 "superblock": true, 00:18:20.366 "num_base_bdevs": 2, 00:18:20.366 "num_base_bdevs_discovered": 1, 00:18:20.366 "num_base_bdevs_operational": 1, 00:18:20.366 "base_bdevs_list": [ 00:18:20.366 { 00:18:20.366 "name": null, 00:18:20.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.366 
"is_configured": false, 00:18:20.366 "data_offset": 0, 00:18:20.366 "data_size": 7936 00:18:20.366 }, 00:18:20.366 { 00:18:20.366 "name": "BaseBdev2", 00:18:20.366 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:20.366 "is_configured": true, 00:18:20.366 "data_offset": 256, 00:18:20.367 "data_size": 7936 00:18:20.367 } 00:18:20.367 ] 00:18:20.367 }' 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.367 [2024-11-29 07:50:10.133985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.367 [2024-11-29 07:50:10.146797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.367 07:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:20.367 [2024-11-29 07:50:10.148565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.306 07:50:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.306 "name": "raid_bdev1", 00:18:21.306 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:21.306 "strip_size_kb": 0, 00:18:21.306 "state": "online", 00:18:21.306 "raid_level": "raid1", 00:18:21.306 "superblock": true, 00:18:21.306 "num_base_bdevs": 2, 00:18:21.306 "num_base_bdevs_discovered": 2, 00:18:21.306 "num_base_bdevs_operational": 2, 00:18:21.306 "process": { 00:18:21.306 "type": "rebuild", 00:18:21.306 "target": "spare", 00:18:21.306 "progress": { 00:18:21.306 "blocks": 2560, 00:18:21.306 "percent": 32 00:18:21.306 } 00:18:21.306 }, 00:18:21.306 "base_bdevs_list": [ 00:18:21.306 { 00:18:21.306 "name": "spare", 00:18:21.306 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:21.306 "is_configured": true, 00:18:21.306 "data_offset": 256, 00:18:21.306 "data_size": 7936 00:18:21.306 }, 
00:18:21.306 { 00:18:21.306 "name": "BaseBdev2", 00:18:21.306 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:21.306 "is_configured": true, 00:18:21.306 "data_offset": 256, 00:18:21.306 "data_size": 7936 00:18:21.306 } 00:18:21.306 ] 00:18:21.306 }' 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.306 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:21.566 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=690 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.566 07:50:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.566 "name": "raid_bdev1", 00:18:21.566 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:21.566 "strip_size_kb": 0, 00:18:21.566 "state": "online", 00:18:21.566 "raid_level": "raid1", 00:18:21.566 "superblock": true, 00:18:21.566 "num_base_bdevs": 2, 00:18:21.566 "num_base_bdevs_discovered": 2, 00:18:21.566 "num_base_bdevs_operational": 2, 00:18:21.566 "process": { 00:18:21.566 "type": "rebuild", 00:18:21.566 "target": "spare", 00:18:21.566 "progress": { 00:18:21.566 "blocks": 2816, 00:18:21.566 "percent": 35 00:18:21.566 } 00:18:21.566 }, 00:18:21.566 "base_bdevs_list": [ 00:18:21.566 { 00:18:21.566 "name": "spare", 00:18:21.566 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:21.566 "is_configured": true, 00:18:21.566 "data_offset": 256, 00:18:21.566 "data_size": 7936 00:18:21.566 }, 00:18:21.566 { 00:18:21.566 "name": "BaseBdev2", 00:18:21.566 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:21.566 
"is_configured": true, 00:18:21.566 "data_offset": 256, 00:18:21.566 "data_size": 7936 00:18:21.566 } 00:18:21.566 ] 00:18:21.566 }' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.566 07:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.507 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.507 07:50:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.766 "name": "raid_bdev1", 00:18:22.766 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:22.766 "strip_size_kb": 0, 00:18:22.766 "state": "online", 00:18:22.766 "raid_level": "raid1", 00:18:22.766 "superblock": true, 00:18:22.766 "num_base_bdevs": 2, 00:18:22.766 "num_base_bdevs_discovered": 2, 00:18:22.766 "num_base_bdevs_operational": 2, 00:18:22.766 "process": { 00:18:22.766 "type": "rebuild", 00:18:22.766 "target": "spare", 00:18:22.766 "progress": { 00:18:22.766 "blocks": 5632, 00:18:22.766 "percent": 70 00:18:22.766 } 00:18:22.766 }, 00:18:22.766 "base_bdevs_list": [ 00:18:22.766 { 00:18:22.766 "name": "spare", 00:18:22.766 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:22.766 "is_configured": true, 00:18:22.766 "data_offset": 256, 00:18:22.766 "data_size": 7936 00:18:22.766 }, 00:18:22.766 { 00:18:22.766 "name": "BaseBdev2", 00:18:22.766 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:22.766 "is_configured": true, 00:18:22.766 "data_offset": 256, 00:18:22.766 "data_size": 7936 00:18:22.766 } 00:18:22.766 ] 00:18:22.766 }' 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.766 07:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.337 [2024-11-29 07:50:13.260354] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:23.337 [2024-11-29 07:50:13.260474] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.337 [2024-11-29 07:50:13.260601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.907 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.907 "name": "raid_bdev1", 00:18:23.907 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:23.907 "strip_size_kb": 0, 00:18:23.907 "state": "online", 00:18:23.907 "raid_level": "raid1", 00:18:23.907 "superblock": true, 00:18:23.907 
"num_base_bdevs": 2, 00:18:23.907 "num_base_bdevs_discovered": 2, 00:18:23.907 "num_base_bdevs_operational": 2, 00:18:23.907 "base_bdevs_list": [ 00:18:23.907 { 00:18:23.907 "name": "spare", 00:18:23.907 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:23.907 "is_configured": true, 00:18:23.907 "data_offset": 256, 00:18:23.907 "data_size": 7936 00:18:23.907 }, 00:18:23.907 { 00:18:23.908 "name": "BaseBdev2", 00:18:23.908 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:23.908 "is_configured": true, 00:18:23.908 "data_offset": 256, 00:18:23.908 "data_size": 7936 00:18:23.908 } 00:18:23.908 ] 00:18:23.908 }' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.908 
07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.908 "name": "raid_bdev1", 00:18:23.908 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:23.908 "strip_size_kb": 0, 00:18:23.908 "state": "online", 00:18:23.908 "raid_level": "raid1", 00:18:23.908 "superblock": true, 00:18:23.908 "num_base_bdevs": 2, 00:18:23.908 "num_base_bdevs_discovered": 2, 00:18:23.908 "num_base_bdevs_operational": 2, 00:18:23.908 "base_bdevs_list": [ 00:18:23.908 { 00:18:23.908 "name": "spare", 00:18:23.908 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:23.908 "is_configured": true, 00:18:23.908 "data_offset": 256, 00:18:23.908 "data_size": 7936 00:18:23.908 }, 00:18:23.908 { 00:18:23.908 "name": "BaseBdev2", 00:18:23.908 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:23.908 "is_configured": true, 00:18:23.908 "data_offset": 256, 00:18:23.908 "data_size": 7936 00:18:23.908 } 00:18:23.908 ] 00:18:23.908 }' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.908 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.168 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.168 "name": "raid_bdev1", 00:18:24.168 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:24.168 
"strip_size_kb": 0, 00:18:24.168 "state": "online", 00:18:24.168 "raid_level": "raid1", 00:18:24.168 "superblock": true, 00:18:24.168 "num_base_bdevs": 2, 00:18:24.168 "num_base_bdevs_discovered": 2, 00:18:24.168 "num_base_bdevs_operational": 2, 00:18:24.168 "base_bdevs_list": [ 00:18:24.168 { 00:18:24.168 "name": "spare", 00:18:24.168 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:24.168 "is_configured": true, 00:18:24.168 "data_offset": 256, 00:18:24.168 "data_size": 7936 00:18:24.168 }, 00:18:24.168 { 00:18:24.168 "name": "BaseBdev2", 00:18:24.168 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:24.168 "is_configured": true, 00:18:24.168 "data_offset": 256, 00:18:24.168 "data_size": 7936 00:18:24.168 } 00:18:24.168 ] 00:18:24.168 }' 00:18:24.168 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.168 07:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.429 [2024-11-29 07:50:14.294022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.429 [2024-11-29 07:50:14.294130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.429 [2024-11-29 07:50:14.294226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.429 [2024-11-29 07:50:14.294302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.429 [2024-11-29 07:50:14.294354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.429 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.690 /dev/nbd0 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.690 1+0 records in 00:18:24.690 1+0 records out 00:18:24.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397244 s, 10.3 MB/s 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.690 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:24.950 /dev/nbd1 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.950 1+0 records in 00:18:24.950 1+0 records out 00:18:24.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346295 s, 11.8 MB/s 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.950 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.210 07:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.470 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.730 [2024-11-29 07:50:15.427252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.730 [2024-11-29 07:50:15.427366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.730 [2024-11-29 07:50:15.427407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:25.730 [2024-11-29 07:50:15.427436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:25.730 [2024-11-29 07:50:15.429390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.730 [2024-11-29 07:50:15.429479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.730 [2024-11-29 07:50:15.429585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.730 [2024-11-29 07:50:15.429661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.730 [2024-11-29 07:50:15.429839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.730 spare 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.730 [2024-11-29 07:50:15.529762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:25.730 [2024-11-29 07:50:15.529836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:25.730 [2024-11-29 07:50:15.529927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:25.730 [2024-11-29 07:50:15.530066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:25.730 [2024-11-29 07:50:15.530075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:25.730 [2024-11-29 07:50:15.530209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.730 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.731 "name": "raid_bdev1", 00:18:25.731 "uuid": 
"95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:25.731 "strip_size_kb": 0, 00:18:25.731 "state": "online", 00:18:25.731 "raid_level": "raid1", 00:18:25.731 "superblock": true, 00:18:25.731 "num_base_bdevs": 2, 00:18:25.731 "num_base_bdevs_discovered": 2, 00:18:25.731 "num_base_bdevs_operational": 2, 00:18:25.731 "base_bdevs_list": [ 00:18:25.731 { 00:18:25.731 "name": "spare", 00:18:25.731 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:25.731 "is_configured": true, 00:18:25.731 "data_offset": 256, 00:18:25.731 "data_size": 7936 00:18:25.731 }, 00:18:25.731 { 00:18:25.731 "name": "BaseBdev2", 00:18:25.731 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:25.731 "is_configured": true, 00:18:25.731 "data_offset": 256, 00:18:25.731 "data_size": 7936 00:18:25.731 } 00:18:25.731 ] 00:18:25.731 }' 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.731 07:50:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.300 "name": "raid_bdev1", 00:18:26.300 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:26.300 "strip_size_kb": 0, 00:18:26.300 "state": "online", 00:18:26.300 "raid_level": "raid1", 00:18:26.300 "superblock": true, 00:18:26.300 "num_base_bdevs": 2, 00:18:26.300 "num_base_bdevs_discovered": 2, 00:18:26.300 "num_base_bdevs_operational": 2, 00:18:26.300 "base_bdevs_list": [ 00:18:26.300 { 00:18:26.300 "name": "spare", 00:18:26.300 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:26.300 "is_configured": true, 00:18:26.300 "data_offset": 256, 00:18:26.300 "data_size": 7936 00:18:26.300 }, 00:18:26.300 { 00:18:26.300 "name": "BaseBdev2", 00:18:26.300 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:26.300 "is_configured": true, 00:18:26.300 "data_offset": 256, 00:18:26.300 "data_size": 7936 00:18:26.300 } 00:18:26.300 ] 00:18:26.300 }' 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:26.300 
07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.300 [2024-11-29 07:50:16.217927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.300 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.558 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.558 "name": "raid_bdev1", 00:18:26.558 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:26.558 "strip_size_kb": 0, 00:18:26.558 "state": "online", 00:18:26.558 "raid_level": "raid1", 00:18:26.558 "superblock": true, 00:18:26.558 "num_base_bdevs": 2, 00:18:26.558 "num_base_bdevs_discovered": 1, 00:18:26.558 "num_base_bdevs_operational": 1, 00:18:26.558 "base_bdevs_list": [ 00:18:26.558 { 00:18:26.558 "name": null, 00:18:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.558 "is_configured": false, 00:18:26.558 "data_offset": 0, 00:18:26.558 "data_size": 7936 00:18:26.558 }, 00:18:26.558 { 00:18:26.558 "name": "BaseBdev2", 00:18:26.558 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:26.558 "is_configured": true, 00:18:26.558 "data_offset": 256, 00:18:26.558 "data_size": 7936 00:18:26.558 } 00:18:26.558 ] 00:18:26.558 }' 00:18:26.558 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.559 07:50:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.818 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.818 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.818 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.818 [2024-11-29 07:50:16.677181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.818 [2024-11-29 07:50:16.677395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.818 [2024-11-29 07:50:16.677417] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:26.818 [2024-11-29 07:50:16.677447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.818 [2024-11-29 07:50:16.689996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:26.818 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.818 07:50:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:26.818 [2024-11-29 07:50:16.691717] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.758 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.758 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.758 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.758 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:27.758 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.017 "name": "raid_bdev1", 00:18:28.017 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:28.017 "strip_size_kb": 0, 00:18:28.017 "state": "online", 00:18:28.017 "raid_level": "raid1", 00:18:28.017 "superblock": true, 00:18:28.017 "num_base_bdevs": 2, 00:18:28.017 "num_base_bdevs_discovered": 2, 00:18:28.017 "num_base_bdevs_operational": 2, 00:18:28.017 "process": { 00:18:28.017 "type": "rebuild", 00:18:28.017 "target": "spare", 00:18:28.017 "progress": { 00:18:28.017 "blocks": 2560, 00:18:28.017 "percent": 32 00:18:28.017 } 00:18:28.017 }, 00:18:28.017 "base_bdevs_list": [ 00:18:28.017 { 00:18:28.017 "name": "spare", 00:18:28.017 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:28.017 "is_configured": true, 00:18:28.017 "data_offset": 256, 00:18:28.017 "data_size": 7936 00:18:28.017 }, 00:18:28.017 { 00:18:28.017 "name": "BaseBdev2", 00:18:28.017 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:28.017 "is_configured": true, 00:18:28.017 "data_offset": 256, 00:18:28.017 "data_size": 7936 00:18:28.017 } 00:18:28.017 ] 00:18:28.017 }' 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.017 [2024-11-29 07:50:17.852103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.017 [2024-11-29 07:50:17.896459] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.017 [2024-11-29 07:50:17.896560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.017 [2024-11-29 07:50:17.896592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.017 [2024-11-29 07:50:17.896626] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.017 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.276 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.276 "name": "raid_bdev1", 00:18:28.276 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:28.276 "strip_size_kb": 0, 00:18:28.276 "state": "online", 00:18:28.276 "raid_level": "raid1", 00:18:28.276 "superblock": true, 00:18:28.276 "num_base_bdevs": 2, 00:18:28.276 "num_base_bdevs_discovered": 1, 00:18:28.276 "num_base_bdevs_operational": 1, 00:18:28.276 "base_bdevs_list": [ 00:18:28.276 { 00:18:28.276 "name": null, 00:18:28.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.276 
"is_configured": false, 00:18:28.276 "data_offset": 0, 00:18:28.276 "data_size": 7936 00:18:28.276 }, 00:18:28.276 { 00:18:28.276 "name": "BaseBdev2", 00:18:28.276 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:28.276 "is_configured": true, 00:18:28.276 "data_offset": 256, 00:18:28.276 "data_size": 7936 00:18:28.276 } 00:18:28.276 ] 00:18:28.276 }' 00:18:28.276 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.276 07:50:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.535 07:50:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.535 07:50:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.535 07:50:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.535 [2024-11-29 07:50:18.338275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.535 [2024-11-29 07:50:18.338376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.535 [2024-11-29 07:50:18.338416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:28.535 [2024-11-29 07:50:18.338445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.535 [2024-11-29 07:50:18.338703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.535 [2024-11-29 07:50:18.338759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.535 [2024-11-29 07:50:18.338837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:28.535 [2024-11-29 07:50:18.338874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:28.535 [2024-11-29 07:50:18.338917] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:28.535 [2024-11-29 07:50:18.338967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.535 [2024-11-29 07:50:18.352490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:28.535 spare 00:18:28.535 07:50:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.535 [2024-11-29 07:50:18.354264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.535 07:50:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:29.475 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.475 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.476 "name": "raid_bdev1", 00:18:29.476 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:29.476 "strip_size_kb": 0, 00:18:29.476 "state": "online", 00:18:29.476 "raid_level": "raid1", 00:18:29.476 "superblock": true, 00:18:29.476 "num_base_bdevs": 2, 00:18:29.476 "num_base_bdevs_discovered": 2, 00:18:29.476 "num_base_bdevs_operational": 2, 00:18:29.476 "process": { 00:18:29.476 "type": "rebuild", 00:18:29.476 "target": "spare", 00:18:29.476 "progress": { 00:18:29.476 "blocks": 2560, 00:18:29.476 "percent": 32 00:18:29.476 } 00:18:29.476 }, 00:18:29.476 "base_bdevs_list": [ 00:18:29.476 { 00:18:29.476 "name": "spare", 00:18:29.476 "uuid": "de72383c-18f9-54a1-8833-4834d8c05630", 00:18:29.476 "is_configured": true, 00:18:29.476 "data_offset": 256, 00:18:29.476 "data_size": 7936 00:18:29.476 }, 00:18:29.476 { 00:18:29.476 "name": "BaseBdev2", 00:18:29.476 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:29.476 "is_configured": true, 00:18:29.476 "data_offset": 256, 00:18:29.476 "data_size": 7936 00:18:29.476 } 00:18:29.476 ] 00:18:29.476 }' 00:18:29.476 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.734 07:50:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.734 [2024-11-29 07:50:19.518510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.734 [2024-11-29 07:50:19.558897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:29.734 [2024-11-29 07:50:19.558949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.734 [2024-11-29 07:50:19.558965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.734 [2024-11-29 07:50:19.558971] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.734 07:50:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.734 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.734 "name": "raid_bdev1", 00:18:29.734 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:29.734 "strip_size_kb": 0, 00:18:29.734 "state": "online", 00:18:29.734 "raid_level": "raid1", 00:18:29.734 "superblock": true, 00:18:29.734 "num_base_bdevs": 2, 00:18:29.734 "num_base_bdevs_discovered": 1, 00:18:29.734 "num_base_bdevs_operational": 1, 00:18:29.734 "base_bdevs_list": [ 00:18:29.734 { 00:18:29.734 "name": null, 00:18:29.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.734 "is_configured": false, 00:18:29.734 "data_offset": 0, 00:18:29.734 "data_size": 7936 00:18:29.734 }, 00:18:29.734 { 00:18:29.734 "name": "BaseBdev2", 00:18:29.734 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:29.734 "is_configured": true, 00:18:29.734 "data_offset": 256, 00:18:29.734 "data_size": 7936 00:18:29.734 } 00:18:29.734 ] 00:18:29.734 }' 00:18:29.735 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.735 07:50:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.304 "name": "raid_bdev1", 00:18:30.304 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:30.304 "strip_size_kb": 0, 00:18:30.304 "state": "online", 00:18:30.304 "raid_level": "raid1", 00:18:30.304 "superblock": true, 00:18:30.304 "num_base_bdevs": 2, 00:18:30.304 "num_base_bdevs_discovered": 1, 00:18:30.304 "num_base_bdevs_operational": 1, 00:18:30.304 "base_bdevs_list": [ 00:18:30.304 { 00:18:30.304 "name": null, 00:18:30.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.304 "is_configured": false, 00:18:30.304 "data_offset": 0, 00:18:30.304 "data_size": 7936 00:18:30.304 }, 00:18:30.304 { 00:18:30.304 "name": "BaseBdev2", 00:18:30.304 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:30.304 "is_configured": true, 
00:18:30.304 "data_offset": 256, 00:18:30.304 "data_size": 7936 00:18:30.304 } 00:18:30.304 ] 00:18:30.304 }' 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.304 [2024-11-29 07:50:20.184688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.304 [2024-11-29 07:50:20.184785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.304 [2024-11-29 07:50:20.184822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:30.304 [2024-11-29 07:50:20.184847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.304 [2024-11-29 07:50:20.185087] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.304 [2024-11-29 07:50:20.185155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.304 [2024-11-29 07:50:20.185228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:30.304 [2024-11-29 07:50:20.185266] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.304 [2024-11-29 07:50:20.185304] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.304 [2024-11-29 07:50:20.185362] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:30.304 BaseBdev1 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.304 07:50:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.686 "name": "raid_bdev1", 00:18:31.686 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:31.686 "strip_size_kb": 0, 00:18:31.686 "state": "online", 00:18:31.686 "raid_level": "raid1", 00:18:31.686 "superblock": true, 00:18:31.686 "num_base_bdevs": 2, 00:18:31.686 "num_base_bdevs_discovered": 1, 00:18:31.686 "num_base_bdevs_operational": 1, 00:18:31.686 "base_bdevs_list": [ 00:18:31.686 { 00:18:31.686 "name": null, 00:18:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.686 "is_configured": false, 00:18:31.686 "data_offset": 0, 00:18:31.686 "data_size": 7936 00:18:31.686 }, 00:18:31.686 { 00:18:31.686 "name": "BaseBdev2", 00:18:31.686 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:31.686 "is_configured": true, 00:18:31.686 "data_offset": 256, 00:18:31.686 "data_size": 7936 00:18:31.686 } 00:18:31.686 ] 00:18:31.686 }' 00:18:31.686 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.686 07:50:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.946 "name": "raid_bdev1", 00:18:31.946 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:31.946 "strip_size_kb": 0, 00:18:31.946 "state": "online", 00:18:31.946 "raid_level": "raid1", 00:18:31.946 "superblock": true, 00:18:31.946 "num_base_bdevs": 2, 00:18:31.946 "num_base_bdevs_discovered": 1, 00:18:31.946 "num_base_bdevs_operational": 1, 00:18:31.946 "base_bdevs_list": [ 00:18:31.946 { 00:18:31.946 "name": null, 00:18:31.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.946 "is_configured": false, 00:18:31.946 "data_offset": 0, 00:18:31.946 
"data_size": 7936 00:18:31.946 }, 00:18:31.946 { 00:18:31.946 "name": "BaseBdev2", 00:18:31.946 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:31.946 "is_configured": true, 00:18:31.946 "data_offset": 256, 00:18:31.946 "data_size": 7936 00:18:31.946 } 00:18:31.946 ] 00:18:31.946 }' 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.946 [2024-11-29 07:50:21.805927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.946 [2024-11-29 07:50:21.806125] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.946 [2024-11-29 07:50:21.806163] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:31.946 request: 00:18:31.946 { 00:18:31.946 "base_bdev": "BaseBdev1", 00:18:31.946 "raid_bdev": "raid_bdev1", 00:18:31.946 "method": "bdev_raid_add_base_bdev", 00:18:31.946 "req_id": 1 00:18:31.946 } 00:18:31.946 Got JSON-RPC error response 00:18:31.946 response: 00:18:31.946 { 00:18:31.946 "code": -22, 00:18:31.946 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:31.946 } 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.946 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.947 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.947 07:50:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.885 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.145 "name": "raid_bdev1", 00:18:33.145 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:33.145 "strip_size_kb": 0, 00:18:33.145 "state": "online", 00:18:33.145 "raid_level": "raid1", 00:18:33.145 "superblock": true, 00:18:33.145 "num_base_bdevs": 2, 00:18:33.145 "num_base_bdevs_discovered": 1, 00:18:33.145 "num_base_bdevs_operational": 1, 00:18:33.145 "base_bdevs_list": [ 
00:18:33.145 { 00:18:33.145 "name": null, 00:18:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.145 "is_configured": false, 00:18:33.145 "data_offset": 0, 00:18:33.145 "data_size": 7936 00:18:33.145 }, 00:18:33.145 { 00:18:33.145 "name": "BaseBdev2", 00:18:33.145 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:33.145 "is_configured": true, 00:18:33.145 "data_offset": 256, 00:18:33.145 "data_size": 7936 00:18:33.145 } 00:18:33.145 ] 00:18:33.145 }' 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.145 07:50:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.406 "name": "raid_bdev1", 00:18:33.406 "uuid": "95de83bd-bc4e-4c6d-b42f-21dc5fbe7e2a", 00:18:33.406 "strip_size_kb": 0, 00:18:33.406 "state": "online", 00:18:33.406 "raid_level": "raid1", 00:18:33.406 "superblock": true, 00:18:33.406 "num_base_bdevs": 2, 00:18:33.406 "num_base_bdevs_discovered": 1, 00:18:33.406 "num_base_bdevs_operational": 1, 00:18:33.406 "base_bdevs_list": [ 00:18:33.406 { 00:18:33.406 "name": null, 00:18:33.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.406 "is_configured": false, 00:18:33.406 "data_offset": 0, 00:18:33.406 "data_size": 7936 00:18:33.406 }, 00:18:33.406 { 00:18:33.406 "name": "BaseBdev2", 00:18:33.406 "uuid": "90f5a644-88b5-5dd3-a1c2-9f9fbb395007", 00:18:33.406 "is_configured": true, 00:18:33.406 "data_offset": 256, 00:18:33.406 "data_size": 7936 00:18:33.406 } 00:18:33.406 ] 00:18:33.406 }' 00:18:33.406 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87434 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87434 ']' 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87434 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.666 
07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87434 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.666 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87434' 00:18:33.666 killing process with pid 87434 00:18:33.666 Received shutdown signal, test time was about 60.000000 seconds 00:18:33.666 00:18:33.666 Latency(us) 00:18:33.666 [2024-11-29T07:50:23.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.666 [2024-11-29T07:50:23.611Z] =================================================================================================================== 00:18:33.666 [2024-11-29T07:50:23.611Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.667 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87434 00:18:33.667 [2024-11-29 07:50:23.473800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.667 [2024-11-29 07:50:23.473909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.667 [2024-11-29 07:50:23.473953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.667 [2024-11-29 07:50:23.473965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:33.667 07:50:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87434 00:18:33.926 [2024-11-29 07:50:23.771549] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.920 07:50:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:34.920 00:18:34.920 real 0m19.700s 00:18:34.920 user 0m25.720s 00:18:34.920 sys 0m2.665s 00:18:34.920 07:50:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.920 ************************************ 00:18:34.920 END TEST raid_rebuild_test_sb_md_separate 00:18:34.920 ************************************ 00:18:34.920 07:50:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.242 07:50:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:35.242 07:50:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:35.242 07:50:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:35.242 07:50:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.242 07:50:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.242 ************************************ 00:18:35.242 START TEST raid_state_function_test_sb_md_interleaved 00:18:35.242 ************************************ 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:35.242 07:50:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88120 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:35.242 Process raid pid: 88120 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88120' 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88120 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88120 ']' 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.242 07:50:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.242 [2024-11-29 07:50:25.002007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:35.242 [2024-11-29 07:50:25.002268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.537 [2024-11-29 07:50:25.183441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.537 [2024-11-29 07:50:25.293512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.796 [2024-11-29 07:50:25.494085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.796 [2024-11-29 07:50:25.494123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.055 [2024-11-29 07:50:25.810325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:36.055 [2024-11-29 07:50:25.810383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:36.055 [2024-11-29 07:50:25.810393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.055 [2024-11-29 07:50:25.810402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.055 07:50:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.055 07:50:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.055 "name": "Existed_Raid", 00:18:36.055 "uuid": "91d9c018-334f-4acf-9b01-f61298d52856", 00:18:36.055 "strip_size_kb": 0, 00:18:36.055 "state": "configuring", 00:18:36.055 "raid_level": "raid1", 00:18:36.055 "superblock": true, 00:18:36.055 "num_base_bdevs": 2, 00:18:36.055 "num_base_bdevs_discovered": 0, 00:18:36.055 "num_base_bdevs_operational": 2, 00:18:36.055 "base_bdevs_list": [ 00:18:36.055 { 00:18:36.055 "name": "BaseBdev1", 00:18:36.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.055 "is_configured": false, 00:18:36.055 "data_offset": 0, 00:18:36.055 "data_size": 0 00:18:36.055 }, 00:18:36.055 { 00:18:36.055 "name": "BaseBdev2", 00:18:36.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.055 "is_configured": false, 00:18:36.055 "data_offset": 0, 00:18:36.055 "data_size": 0 00:18:36.055 } 00:18:36.055 ] 00:18:36.055 }' 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.055 07:50:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.314 [2024-11-29 07:50:26.249504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.314 [2024-11-29 07:50:26.249590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.314 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.572 [2024-11-29 07:50:26.261486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:36.572 [2024-11-29 07:50:26.261574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:36.572 [2024-11-29 07:50:26.261617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.572 [2024-11-29 07:50:26.261642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.572 [2024-11-29 07:50:26.309472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.572 BaseBdev1 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.572 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.572 [ 00:18:36.572 { 00:18:36.572 "name": "BaseBdev1", 00:18:36.572 "aliases": [ 00:18:36.572 "525e1b96-1df9-4b3d-99a3-a2ff3f189b52" 00:18:36.572 ], 00:18:36.572 "product_name": "Malloc disk", 00:18:36.572 "block_size": 4128, 00:18:36.572 "num_blocks": 8192, 00:18:36.572 "uuid": "525e1b96-1df9-4b3d-99a3-a2ff3f189b52", 00:18:36.572 "md_size": 32, 00:18:36.572 
"md_interleave": true, 00:18:36.572 "dif_type": 0, 00:18:36.572 "assigned_rate_limits": { 00:18:36.572 "rw_ios_per_sec": 0, 00:18:36.572 "rw_mbytes_per_sec": 0, 00:18:36.572 "r_mbytes_per_sec": 0, 00:18:36.572 "w_mbytes_per_sec": 0 00:18:36.572 }, 00:18:36.572 "claimed": true, 00:18:36.572 "claim_type": "exclusive_write", 00:18:36.572 "zoned": false, 00:18:36.572 "supported_io_types": { 00:18:36.572 "read": true, 00:18:36.572 "write": true, 00:18:36.572 "unmap": true, 00:18:36.572 "flush": true, 00:18:36.572 "reset": true, 00:18:36.572 "nvme_admin": false, 00:18:36.572 "nvme_io": false, 00:18:36.572 "nvme_io_md": false, 00:18:36.572 "write_zeroes": true, 00:18:36.572 "zcopy": true, 00:18:36.572 "get_zone_info": false, 00:18:36.572 "zone_management": false, 00:18:36.572 "zone_append": false, 00:18:36.573 "compare": false, 00:18:36.573 "compare_and_write": false, 00:18:36.573 "abort": true, 00:18:36.573 "seek_hole": false, 00:18:36.573 "seek_data": false, 00:18:36.573 "copy": true, 00:18:36.573 "nvme_iov_md": false 00:18:36.573 }, 00:18:36.573 "memory_domains": [ 00:18:36.573 { 00:18:36.573 "dma_device_id": "system", 00:18:36.573 "dma_device_type": 1 00:18:36.573 }, 00:18:36.573 { 00:18:36.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.573 "dma_device_type": 2 00:18:36.573 } 00:18:36.573 ], 00:18:36.573 "driver_specific": {} 00:18:36.573 } 00:18:36.573 ] 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.573 07:50:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.573 "name": "Existed_Raid", 00:18:36.573 "uuid": "ff7d76a7-4dd1-4dc8-a67e-51889f8eaffc", 00:18:36.573 "strip_size_kb": 0, 00:18:36.573 "state": "configuring", 00:18:36.573 "raid_level": "raid1", 
00:18:36.573 "superblock": true, 00:18:36.573 "num_base_bdevs": 2, 00:18:36.573 "num_base_bdevs_discovered": 1, 00:18:36.573 "num_base_bdevs_operational": 2, 00:18:36.573 "base_bdevs_list": [ 00:18:36.573 { 00:18:36.573 "name": "BaseBdev1", 00:18:36.573 "uuid": "525e1b96-1df9-4b3d-99a3-a2ff3f189b52", 00:18:36.573 "is_configured": true, 00:18:36.573 "data_offset": 256, 00:18:36.573 "data_size": 7936 00:18:36.573 }, 00:18:36.573 { 00:18:36.573 "name": "BaseBdev2", 00:18:36.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.573 "is_configured": false, 00:18:36.573 "data_offset": 0, 00:18:36.573 "data_size": 0 00:18:36.573 } 00:18:36.573 ] 00:18:36.573 }' 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.573 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.831 [2024-11-29 07:50:26.768762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.831 [2024-11-29 07:50:26.768802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:36.831 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.090 [2024-11-29 07:50:26.780794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.090 [2024-11-29 07:50:26.782523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.090 [2024-11-29 07:50:26.782598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.090 
07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.090 "name": "Existed_Raid", 00:18:37.090 "uuid": "34e15922-f27f-4640-8fb2-2fad5c69bb42", 00:18:37.090 "strip_size_kb": 0, 00:18:37.090 "state": "configuring", 00:18:37.090 "raid_level": "raid1", 00:18:37.090 "superblock": true, 00:18:37.090 "num_base_bdevs": 2, 00:18:37.090 "num_base_bdevs_discovered": 1, 00:18:37.090 "num_base_bdevs_operational": 2, 00:18:37.090 "base_bdevs_list": [ 00:18:37.090 { 00:18:37.090 "name": "BaseBdev1", 00:18:37.090 "uuid": "525e1b96-1df9-4b3d-99a3-a2ff3f189b52", 00:18:37.090 "is_configured": true, 00:18:37.090 "data_offset": 256, 00:18:37.090 "data_size": 7936 00:18:37.090 }, 00:18:37.090 { 00:18:37.090 "name": "BaseBdev2", 00:18:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.090 "is_configured": false, 00:18:37.090 "data_offset": 0, 00:18:37.090 "data_size": 0 00:18:37.090 } 00:18:37.090 ] 00:18:37.090 }' 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:37.090 07:50:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.349 [2024-11-29 07:50:27.276509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.349 [2024-11-29 07:50:27.276709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:37.349 [2024-11-29 07:50:27.276722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:37.349 [2024-11-29 07:50:27.276797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:37.349 [2024-11-29 07:50:27.276864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:37.349 [2024-11-29 07:50:27.276873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:37.349 [2024-11-29 07:50:27.276926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.349 BaseBdev2 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.349 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.608 [ 00:18:37.608 { 00:18:37.608 "name": "BaseBdev2", 00:18:37.608 "aliases": [ 00:18:37.608 "c9d3cdb4-cfe7-4a42-bc13-f6a9c9429e25" 00:18:37.608 ], 00:18:37.608 "product_name": "Malloc disk", 00:18:37.608 "block_size": 4128, 00:18:37.608 "num_blocks": 8192, 00:18:37.608 "uuid": "c9d3cdb4-cfe7-4a42-bc13-f6a9c9429e25", 00:18:37.608 "md_size": 32, 00:18:37.608 "md_interleave": true, 00:18:37.608 "dif_type": 0, 00:18:37.608 "assigned_rate_limits": { 00:18:37.608 "rw_ios_per_sec": 0, 00:18:37.608 "rw_mbytes_per_sec": 0, 00:18:37.608 "r_mbytes_per_sec": 0, 00:18:37.608 "w_mbytes_per_sec": 0 00:18:37.608 }, 00:18:37.608 "claimed": true, 00:18:37.608 "claim_type": "exclusive_write", 
00:18:37.608 "zoned": false, 00:18:37.608 "supported_io_types": { 00:18:37.608 "read": true, 00:18:37.608 "write": true, 00:18:37.608 "unmap": true, 00:18:37.608 "flush": true, 00:18:37.608 "reset": true, 00:18:37.608 "nvme_admin": false, 00:18:37.608 "nvme_io": false, 00:18:37.608 "nvme_io_md": false, 00:18:37.608 "write_zeroes": true, 00:18:37.608 "zcopy": true, 00:18:37.608 "get_zone_info": false, 00:18:37.608 "zone_management": false, 00:18:37.608 "zone_append": false, 00:18:37.608 "compare": false, 00:18:37.608 "compare_and_write": false, 00:18:37.608 "abort": true, 00:18:37.608 "seek_hole": false, 00:18:37.608 "seek_data": false, 00:18:37.608 "copy": true, 00:18:37.608 "nvme_iov_md": false 00:18:37.608 }, 00:18:37.608 "memory_domains": [ 00:18:37.608 { 00:18:37.608 "dma_device_id": "system", 00:18:37.608 "dma_device_type": 1 00:18:37.608 }, 00:18:37.608 { 00:18:37.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.608 "dma_device_type": 2 00:18:37.608 } 00:18:37.608 ], 00:18:37.608 "driver_specific": {} 00:18:37.608 } 00:18:37.608 ] 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.608 
07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.608 "name": "Existed_Raid", 00:18:37.608 "uuid": "34e15922-f27f-4640-8fb2-2fad5c69bb42", 00:18:37.608 "strip_size_kb": 0, 00:18:37.608 "state": "online", 00:18:37.608 "raid_level": "raid1", 00:18:37.608 "superblock": true, 00:18:37.608 "num_base_bdevs": 2, 00:18:37.608 "num_base_bdevs_discovered": 2, 00:18:37.608 
"num_base_bdevs_operational": 2, 00:18:37.608 "base_bdevs_list": [ 00:18:37.608 { 00:18:37.608 "name": "BaseBdev1", 00:18:37.608 "uuid": "525e1b96-1df9-4b3d-99a3-a2ff3f189b52", 00:18:37.608 "is_configured": true, 00:18:37.608 "data_offset": 256, 00:18:37.608 "data_size": 7936 00:18:37.608 }, 00:18:37.608 { 00:18:37.608 "name": "BaseBdev2", 00:18:37.608 "uuid": "c9d3cdb4-cfe7-4a42-bc13-f6a9c9429e25", 00:18:37.608 "is_configured": true, 00:18:37.608 "data_offset": 256, 00:18:37.608 "data_size": 7936 00:18:37.608 } 00:18:37.608 ] 00:18:37.608 }' 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.608 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:37.866 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.866 07:50:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.866 [2024-11-29 07:50:27.807902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.125 "name": "Existed_Raid", 00:18:38.125 "aliases": [ 00:18:38.125 "34e15922-f27f-4640-8fb2-2fad5c69bb42" 00:18:38.125 ], 00:18:38.125 "product_name": "Raid Volume", 00:18:38.125 "block_size": 4128, 00:18:38.125 "num_blocks": 7936, 00:18:38.125 "uuid": "34e15922-f27f-4640-8fb2-2fad5c69bb42", 00:18:38.125 "md_size": 32, 00:18:38.125 "md_interleave": true, 00:18:38.125 "dif_type": 0, 00:18:38.125 "assigned_rate_limits": { 00:18:38.125 "rw_ios_per_sec": 0, 00:18:38.125 "rw_mbytes_per_sec": 0, 00:18:38.125 "r_mbytes_per_sec": 0, 00:18:38.125 "w_mbytes_per_sec": 0 00:18:38.125 }, 00:18:38.125 "claimed": false, 00:18:38.125 "zoned": false, 00:18:38.125 "supported_io_types": { 00:18:38.125 "read": true, 00:18:38.125 "write": true, 00:18:38.125 "unmap": false, 00:18:38.125 "flush": false, 00:18:38.125 "reset": true, 00:18:38.125 "nvme_admin": false, 00:18:38.125 "nvme_io": false, 00:18:38.125 "nvme_io_md": false, 00:18:38.125 "write_zeroes": true, 00:18:38.125 "zcopy": false, 00:18:38.125 "get_zone_info": false, 00:18:38.125 "zone_management": false, 00:18:38.125 "zone_append": false, 00:18:38.125 "compare": false, 00:18:38.125 "compare_and_write": false, 00:18:38.125 "abort": false, 00:18:38.125 "seek_hole": false, 00:18:38.125 "seek_data": false, 00:18:38.125 "copy": false, 00:18:38.125 "nvme_iov_md": false 00:18:38.125 }, 00:18:38.125 "memory_domains": [ 00:18:38.125 { 00:18:38.125 "dma_device_id": "system", 00:18:38.125 "dma_device_type": 1 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:38.125 "dma_device_type": 2 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "dma_device_id": "system", 00:18:38.125 "dma_device_type": 1 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.125 "dma_device_type": 2 00:18:38.125 } 00:18:38.125 ], 00:18:38.125 "driver_specific": { 00:18:38.125 "raid": { 00:18:38.125 "uuid": "34e15922-f27f-4640-8fb2-2fad5c69bb42", 00:18:38.125 "strip_size_kb": 0, 00:18:38.125 "state": "online", 00:18:38.125 "raid_level": "raid1", 00:18:38.125 "superblock": true, 00:18:38.125 "num_base_bdevs": 2, 00:18:38.125 "num_base_bdevs_discovered": 2, 00:18:38.125 "num_base_bdevs_operational": 2, 00:18:38.125 "base_bdevs_list": [ 00:18:38.125 { 00:18:38.125 "name": "BaseBdev1", 00:18:38.125 "uuid": "525e1b96-1df9-4b3d-99a3-a2ff3f189b52", 00:18:38.125 "is_configured": true, 00:18:38.125 "data_offset": 256, 00:18:38.125 "data_size": 7936 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "name": "BaseBdev2", 00:18:38.125 "uuid": "c9d3cdb4-cfe7-4a42-bc13-f6a9c9429e25", 00:18:38.125 "is_configured": true, 00:18:38.125 "data_offset": 256, 00:18:38.125 "data_size": 7936 00:18:38.125 } 00:18:38.125 ] 00:18:38.125 } 00:18:38.125 } 00:18:38.125 }' 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:38.125 BaseBdev2' 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:38.125 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:38.126 
07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.126 07:50:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.126 [2024-11-29 07:50:28.003343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.384 07:50:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.384 "name": "Existed_Raid", 00:18:38.384 "uuid": "34e15922-f27f-4640-8fb2-2fad5c69bb42", 00:18:38.384 "strip_size_kb": 0, 00:18:38.384 "state": "online", 00:18:38.384 "raid_level": "raid1", 00:18:38.384 "superblock": true, 00:18:38.384 "num_base_bdevs": 2, 00:18:38.384 "num_base_bdevs_discovered": 1, 00:18:38.384 "num_base_bdevs_operational": 1, 00:18:38.384 "base_bdevs_list": [ 00:18:38.384 { 00:18:38.384 "name": null, 00:18:38.384 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:38.384 "is_configured": false, 00:18:38.384 "data_offset": 0, 00:18:38.384 "data_size": 7936 00:18:38.384 }, 00:18:38.384 { 00:18:38.384 "name": "BaseBdev2", 00:18:38.384 "uuid": "c9d3cdb4-cfe7-4a42-bc13-f6a9c9429e25", 00:18:38.384 "is_configured": true, 00:18:38.384 "data_offset": 256, 00:18:38.384 "data_size": 7936 00:18:38.384 } 00:18:38.384 ] 00:18:38.384 }' 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.384 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:38.643 07:50:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.643 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.643 [2024-11-29 07:50:28.568871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.643 [2024-11-29 07:50:28.569056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.902 [2024-11-29 07:50:28.658432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.902 [2024-11-29 07:50:28.658560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.902 [2024-11-29 07:50:28.658601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88120 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88120 ']' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88120 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88120 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.902 killing process with pid 88120 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88120' 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88120 00:18:38.902 [2024-11-29 07:50:28.754773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.902 07:50:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88120 00:18:38.902 [2024-11-29 07:50:28.770653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.279 
************************************ 00:18:40.279 END TEST raid_state_function_test_sb_md_interleaved 00:18:40.279 ************************************ 00:18:40.279 07:50:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:40.279 00:18:40.279 real 0m4.938s 00:18:40.279 user 0m7.077s 00:18:40.279 sys 0m0.910s 00:18:40.279 07:50:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.279 07:50:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.279 07:50:29 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:40.279 07:50:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:40.279 07:50:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.279 07:50:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.279 ************************************ 00:18:40.279 START TEST raid_superblock_test_md_interleaved 00:18:40.279 ************************************ 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88369 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88369 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88369 ']' 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.279 07:50:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.279 [2024-11-29 07:50:29.996252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:40.279 [2024-11-29 07:50:29.996429] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88369 ] 00:18:40.279 [2024-11-29 07:50:30.169041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.539 [2024-11-29 07:50:30.272089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.539 [2024-11-29 07:50:30.467449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.539 [2024-11-29 07:50:30.467530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.107 malloc1 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.107 [2024-11-29 07:50:30.882258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.107 [2024-11-29 07:50:30.882399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.107 [2024-11-29 07:50:30.882435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:41.107 [2024-11-29 07:50:30.882463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.107 
[2024-11-29 07:50:30.884221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.107 [2024-11-29 07:50:30.884307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:41.107 pt1 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.107 malloc2 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.107 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.107 [2024-11-29 07:50:30.941272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.107 [2024-11-29 07:50:30.941327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.108 [2024-11-29 07:50:30.941346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:41.108 [2024-11-29 07:50:30.941355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.108 [2024-11-29 07:50:30.943065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.108 [2024-11-29 07:50:30.943119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.108 pt2 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.108 [2024-11-29 07:50:30.953280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.108 [2024-11-29 07:50:30.954976] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.108 [2024-11-29 07:50:30.955164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:41.108 [2024-11-29 07:50:30.955178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:41.108 [2024-11-29 07:50:30.955250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:41.108 [2024-11-29 07:50:30.955316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:41.108 [2024-11-29 07:50:30.955327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:41.108 [2024-11-29 07:50:30.955393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.108 
07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.108 07:50:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.108 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.108 "name": "raid_bdev1", 00:18:41.108 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:41.108 "strip_size_kb": 0, 00:18:41.108 "state": "online", 00:18:41.108 "raid_level": "raid1", 00:18:41.108 "superblock": true, 00:18:41.108 "num_base_bdevs": 2, 00:18:41.108 "num_base_bdevs_discovered": 2, 00:18:41.108 "num_base_bdevs_operational": 2, 00:18:41.108 "base_bdevs_list": [ 00:18:41.108 { 00:18:41.108 "name": "pt1", 00:18:41.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.108 "is_configured": true, 00:18:41.108 "data_offset": 256, 00:18:41.108 "data_size": 7936 00:18:41.108 }, 00:18:41.108 { 00:18:41.108 "name": "pt2", 00:18:41.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.108 "is_configured": true, 00:18:41.108 "data_offset": 256, 00:18:41.108 "data_size": 7936 00:18:41.108 } 00:18:41.108 ] 00:18:41.108 }' 00:18:41.108 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.108 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 [2024-11-29 07:50:31.432729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.676 "name": "raid_bdev1", 00:18:41.676 "aliases": [ 00:18:41.676 "104383bd-c2d6-4834-a40f-709b7dcd4aa1" 00:18:41.676 ], 00:18:41.676 "product_name": "Raid Volume", 00:18:41.676 "block_size": 4128, 00:18:41.676 "num_blocks": 7936, 00:18:41.676 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:41.676 "md_size": 32, 
00:18:41.676 "md_interleave": true, 00:18:41.676 "dif_type": 0, 00:18:41.676 "assigned_rate_limits": { 00:18:41.676 "rw_ios_per_sec": 0, 00:18:41.676 "rw_mbytes_per_sec": 0, 00:18:41.676 "r_mbytes_per_sec": 0, 00:18:41.676 "w_mbytes_per_sec": 0 00:18:41.676 }, 00:18:41.676 "claimed": false, 00:18:41.676 "zoned": false, 00:18:41.676 "supported_io_types": { 00:18:41.676 "read": true, 00:18:41.676 "write": true, 00:18:41.676 "unmap": false, 00:18:41.676 "flush": false, 00:18:41.676 "reset": true, 00:18:41.676 "nvme_admin": false, 00:18:41.676 "nvme_io": false, 00:18:41.676 "nvme_io_md": false, 00:18:41.676 "write_zeroes": true, 00:18:41.676 "zcopy": false, 00:18:41.676 "get_zone_info": false, 00:18:41.676 "zone_management": false, 00:18:41.676 "zone_append": false, 00:18:41.676 "compare": false, 00:18:41.676 "compare_and_write": false, 00:18:41.676 "abort": false, 00:18:41.676 "seek_hole": false, 00:18:41.676 "seek_data": false, 00:18:41.676 "copy": false, 00:18:41.676 "nvme_iov_md": false 00:18:41.676 }, 00:18:41.676 "memory_domains": [ 00:18:41.676 { 00:18:41.676 "dma_device_id": "system", 00:18:41.676 "dma_device_type": 1 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.676 "dma_device_type": 2 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "dma_device_id": "system", 00:18:41.676 "dma_device_type": 1 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.676 "dma_device_type": 2 00:18:41.676 } 00:18:41.676 ], 00:18:41.676 "driver_specific": { 00:18:41.676 "raid": { 00:18:41.676 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:41.676 "strip_size_kb": 0, 00:18:41.676 "state": "online", 00:18:41.676 "raid_level": "raid1", 00:18:41.676 "superblock": true, 00:18:41.676 "num_base_bdevs": 2, 00:18:41.676 "num_base_bdevs_discovered": 2, 00:18:41.676 "num_base_bdevs_operational": 2, 00:18:41.676 "base_bdevs_list": [ 00:18:41.676 { 00:18:41.676 "name": "pt1", 00:18:41.676 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:41.676 "is_configured": true, 00:18:41.676 "data_offset": 256, 00:18:41.676 "data_size": 7936 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "name": "pt2", 00:18:41.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.676 "is_configured": true, 00:18:41.676 "data_offset": 256, 00:18:41.676 "data_size": 7936 00:18:41.676 } 00:18:41.676 ] 00:18:41.676 } 00:18:41.676 } 00:18:41.676 }' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:41.676 pt2' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.676 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.676 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.934 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:41.934 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.934 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.934 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 [2024-11-29 07:50:31.680283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=104383bd-c2d6-4834-a40f-709b7dcd4aa1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 104383bd-c2d6-4834-a40f-709b7dcd4aa1 ']' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 [2024-11-29 07:50:31.708001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.935 [2024-11-29 07:50:31.708067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.935 [2024-11-29 07:50:31.708168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.935 [2024-11-29 07:50:31.708219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.935 [2024-11-29 07:50:31.708231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 [2024-11-29 07:50:31.847856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:41.935 [2024-11-29 07:50:31.849614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:41.935 [2024-11-29 07:50:31.849682] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:41.935 [2024-11-29 07:50:31.849740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:41.935 [2024-11-29 07:50:31.849753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.935 [2024-11-29 07:50:31.849762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:41.935 request: 00:18:41.935 { 00:18:41.935 "name": "raid_bdev1", 00:18:41.935 "raid_level": "raid1", 00:18:41.935 "base_bdevs": [ 00:18:41.935 "malloc1", 00:18:41.935 "malloc2" 00:18:41.935 ], 00:18:41.935 "superblock": false, 00:18:41.935 "method": "bdev_raid_create", 00:18:41.935 "req_id": 1 00:18:41.935 } 00:18:41.935 Got JSON-RPC error response 00:18:41.935 response: 00:18:41.935 { 00:18:41.935 "code": -17, 00:18:41.935 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:41.935 } 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:41.935 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.194 [2024-11-29 07:50:31.911731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:42.194 [2024-11-29 07:50:31.911826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.194 [2024-11-29 07:50:31.911857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:42.194 [2024-11-29 07:50:31.911887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.194 [2024-11-29 07:50:31.913698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.194 [2024-11-29 07:50:31.913783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:42.194 [2024-11-29 07:50:31.913844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:42.194 [2024-11-29 07:50:31.913922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:42.194 pt1 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.194 07:50:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.194 
"name": "raid_bdev1", 00:18:42.194 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:42.194 "strip_size_kb": 0, 00:18:42.194 "state": "configuring", 00:18:42.194 "raid_level": "raid1", 00:18:42.194 "superblock": true, 00:18:42.194 "num_base_bdevs": 2, 00:18:42.194 "num_base_bdevs_discovered": 1, 00:18:42.194 "num_base_bdevs_operational": 2, 00:18:42.194 "base_bdevs_list": [ 00:18:42.194 { 00:18:42.194 "name": "pt1", 00:18:42.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.194 "is_configured": true, 00:18:42.194 "data_offset": 256, 00:18:42.194 "data_size": 7936 00:18:42.194 }, 00:18:42.194 { 00:18:42.194 "name": null, 00:18:42.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.194 "is_configured": false, 00:18:42.194 "data_offset": 256, 00:18:42.194 "data_size": 7936 00:18:42.194 } 00:18:42.194 ] 00:18:42.194 }' 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.194 07:50:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.452 [2024-11-29 07:50:32.323010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.452 [2024-11-29 07:50:32.323070] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.452 [2024-11-29 07:50:32.323089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:42.452 [2024-11-29 07:50:32.323113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.452 [2024-11-29 07:50:32.323239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.452 [2024-11-29 07:50:32.323254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.452 [2024-11-29 07:50:32.323292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.452 [2024-11-29 07:50:32.323311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.452 [2024-11-29 07:50:32.323385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:42.452 [2024-11-29 07:50:32.323396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:42.452 [2024-11-29 07:50:32.323480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:42.452 [2024-11-29 07:50:32.323551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:42.452 [2024-11-29 07:50:32.323559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:42.452 [2024-11-29 07:50:32.323618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.452 pt2 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:42.452 07:50:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.452 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.452 "name": 
"raid_bdev1", 00:18:42.452 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:42.452 "strip_size_kb": 0, 00:18:42.452 "state": "online", 00:18:42.452 "raid_level": "raid1", 00:18:42.452 "superblock": true, 00:18:42.452 "num_base_bdevs": 2, 00:18:42.452 "num_base_bdevs_discovered": 2, 00:18:42.452 "num_base_bdevs_operational": 2, 00:18:42.452 "base_bdevs_list": [ 00:18:42.452 { 00:18:42.452 "name": "pt1", 00:18:42.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.452 "is_configured": true, 00:18:42.452 "data_offset": 256, 00:18:42.452 "data_size": 7936 00:18:42.452 }, 00:18:42.452 { 00:18:42.452 "name": "pt2", 00:18:42.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.452 "is_configured": true, 00:18:42.453 "data_offset": 256, 00:18:42.453 "data_size": 7936 00:18:42.453 } 00:18:42.453 ] 00:18:42.453 }' 00:18:42.453 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.453 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.018 07:50:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.018 [2024-11-29 07:50:32.770448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.018 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.018 "name": "raid_bdev1", 00:18:43.018 "aliases": [ 00:18:43.018 "104383bd-c2d6-4834-a40f-709b7dcd4aa1" 00:18:43.018 ], 00:18:43.018 "product_name": "Raid Volume", 00:18:43.018 "block_size": 4128, 00:18:43.018 "num_blocks": 7936, 00:18:43.018 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:43.018 "md_size": 32, 00:18:43.018 "md_interleave": true, 00:18:43.018 "dif_type": 0, 00:18:43.018 "assigned_rate_limits": { 00:18:43.018 "rw_ios_per_sec": 0, 00:18:43.018 "rw_mbytes_per_sec": 0, 00:18:43.018 "r_mbytes_per_sec": 0, 00:18:43.018 "w_mbytes_per_sec": 0 00:18:43.018 }, 00:18:43.018 "claimed": false, 00:18:43.018 "zoned": false, 00:18:43.018 "supported_io_types": { 00:18:43.018 "read": true, 00:18:43.018 "write": true, 00:18:43.018 "unmap": false, 00:18:43.018 "flush": false, 00:18:43.018 "reset": true, 00:18:43.018 "nvme_admin": false, 00:18:43.018 "nvme_io": false, 00:18:43.018 "nvme_io_md": false, 00:18:43.018 "write_zeroes": true, 00:18:43.018 "zcopy": false, 00:18:43.018 "get_zone_info": false, 00:18:43.018 "zone_management": false, 00:18:43.019 "zone_append": false, 00:18:43.019 "compare": false, 00:18:43.019 "compare_and_write": false, 00:18:43.019 "abort": false, 00:18:43.019 "seek_hole": false, 00:18:43.019 "seek_data": false, 00:18:43.019 "copy": false, 00:18:43.019 "nvme_iov_md": 
false 00:18:43.019 }, 00:18:43.019 "memory_domains": [ 00:18:43.019 { 00:18:43.019 "dma_device_id": "system", 00:18:43.019 "dma_device_type": 1 00:18:43.019 }, 00:18:43.019 { 00:18:43.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.019 "dma_device_type": 2 00:18:43.019 }, 00:18:43.019 { 00:18:43.019 "dma_device_id": "system", 00:18:43.019 "dma_device_type": 1 00:18:43.019 }, 00:18:43.019 { 00:18:43.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.019 "dma_device_type": 2 00:18:43.019 } 00:18:43.019 ], 00:18:43.019 "driver_specific": { 00:18:43.019 "raid": { 00:18:43.019 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:43.019 "strip_size_kb": 0, 00:18:43.019 "state": "online", 00:18:43.019 "raid_level": "raid1", 00:18:43.019 "superblock": true, 00:18:43.019 "num_base_bdevs": 2, 00:18:43.019 "num_base_bdevs_discovered": 2, 00:18:43.019 "num_base_bdevs_operational": 2, 00:18:43.019 "base_bdevs_list": [ 00:18:43.019 { 00:18:43.019 "name": "pt1", 00:18:43.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.019 "is_configured": true, 00:18:43.019 "data_offset": 256, 00:18:43.019 "data_size": 7936 00:18:43.019 }, 00:18:43.019 { 00:18:43.019 "name": "pt2", 00:18:43.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.019 "is_configured": true, 00:18:43.019 "data_offset": 256, 00:18:43.019 "data_size": 7936 00:18:43.019 } 00:18:43.019 ] 00:18:43.019 } 00:18:43.019 } 00:18:43.019 }' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:43.019 pt2' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.019 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.278 07:50:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:43.278 [2024-11-29 07:50:33.002109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 104383bd-c2d6-4834-a40f-709b7dcd4aa1 '!=' 104383bd-c2d6-4834-a40f-709b7dcd4aa1 ']' 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.278 [2024-11-29 07:50:33.053803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:43.278 "name": "raid_bdev1", 00:18:43.278 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:43.278 "strip_size_kb": 0, 00:18:43.278 "state": "online", 00:18:43.278 "raid_level": "raid1", 00:18:43.278 "superblock": true, 00:18:43.278 "num_base_bdevs": 2, 00:18:43.278 "num_base_bdevs_discovered": 1, 00:18:43.278 "num_base_bdevs_operational": 1, 00:18:43.278 "base_bdevs_list": [ 00:18:43.278 { 00:18:43.278 "name": null, 00:18:43.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.278 "is_configured": false, 00:18:43.278 "data_offset": 0, 00:18:43.278 "data_size": 7936 00:18:43.278 }, 00:18:43.278 { 00:18:43.278 "name": "pt2", 00:18:43.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.278 "is_configured": true, 00:18:43.278 "data_offset": 256, 00:18:43.278 "data_size": 7936 00:18:43.278 } 00:18:43.278 ] 00:18:43.278 }' 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.278 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.845 [2024-11-29 07:50:33.497034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.845 [2024-11-29 07:50:33.497135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.845 [2024-11-29 07:50:33.497208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.845 [2024-11-29 07:50:33.497266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:43.845 [2024-11-29 07:50:33.497338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:43.845 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 [2024-11-29 07:50:33.568923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.846 [2024-11-29 07:50:33.569030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.846 [2024-11-29 07:50:33.569048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:43.846 [2024-11-29 07:50:33.569058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.846 [2024-11-29 07:50:33.570962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.846 [2024-11-29 07:50:33.571005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.846 [2024-11-29 07:50:33.571047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:43.846 [2024-11-29 07:50:33.571093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.846 [2024-11-29 07:50:33.571166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:43.846 [2024-11-29 07:50:33.571179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:43.846 [2024-11-29 07:50:33.571261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.846 [2024-11-29 07:50:33.571321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:43.846 [2024-11-29 07:50:33.571329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:43.846 [2024-11-29 07:50:33.571382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.846 pt2 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.846 07:50:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.846 "name": "raid_bdev1", 00:18:43.846 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:43.846 "strip_size_kb": 0, 00:18:43.846 "state": "online", 00:18:43.846 "raid_level": "raid1", 00:18:43.846 "superblock": true, 00:18:43.846 "num_base_bdevs": 2, 00:18:43.846 "num_base_bdevs_discovered": 1, 00:18:43.846 "num_base_bdevs_operational": 1, 00:18:43.846 "base_bdevs_list": [ 00:18:43.846 { 00:18:43.846 "name": null, 00:18:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.846 "is_configured": false, 00:18:43.846 "data_offset": 256, 00:18:43.846 "data_size": 7936 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "name": "pt2", 00:18:43.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.846 "is_configured": true, 00:18:43.846 "data_offset": 256, 00:18:43.846 "data_size": 7936 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 }' 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.846 07:50:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.413 07:50:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.413 [2024-11-29 07:50:34.056042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.413 [2024-11-29 07:50:34.056129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.413 [2024-11-29 07:50:34.056213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.413 [2024-11-29 07:50:34.056269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.413 [2024-11-29 07:50:34.056301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.413 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.413 [2024-11-29 07:50:34.100033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.413 [2024-11-29 07:50:34.100148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.413 [2024-11-29 07:50:34.100182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:44.413 [2024-11-29 07:50:34.100221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.413 [2024-11-29 07:50:34.101996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.413 [2024-11-29 07:50:34.102079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.413 [2024-11-29 07:50:34.102149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:44.413 [2024-11-29 07:50:34.102220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.413 [2024-11-29 07:50:34.102321] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:44.413 [2024-11-29 07:50:34.102378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.413 [2024-11-29 07:50:34.102415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:44.413 [2024-11-29 07:50:34.102511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.413 [2024-11-29 07:50:34.102612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:44.413 [2024-11-29 07:50:34.102648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:44.413 [2024-11-29 07:50:34.102725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:44.413 [2024-11-29 07:50:34.102811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:44.414 [2024-11-29 07:50:34.102852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:44.414 [2024-11-29 07:50:34.102946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.414 pt1 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.414 07:50:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.414 "name": "raid_bdev1", 00:18:44.414 "uuid": "104383bd-c2d6-4834-a40f-709b7dcd4aa1", 00:18:44.414 "strip_size_kb": 0, 00:18:44.414 "state": "online", 00:18:44.414 "raid_level": "raid1", 00:18:44.414 "superblock": true, 00:18:44.414 "num_base_bdevs": 2, 00:18:44.414 "num_base_bdevs_discovered": 1, 00:18:44.414 "num_base_bdevs_operational": 1, 00:18:44.414 "base_bdevs_list": [ 00:18:44.414 { 00:18:44.414 "name": null, 00:18:44.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.414 "is_configured": false, 00:18:44.414 "data_offset": 256, 00:18:44.414 "data_size": 7936 00:18:44.414 }, 00:18:44.414 { 00:18:44.414 "name": "pt2", 00:18:44.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.414 "is_configured": true, 00:18:44.414 "data_offset": 256, 00:18:44.414 "data_size": 7936 00:18:44.414 } 00:18:44.414 ] 00:18:44.414 }' 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.414 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.673 [2024-11-29 07:50:34.595498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.673 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 104383bd-c2d6-4834-a40f-709b7dcd4aa1 '!=' 104383bd-c2d6-4834-a40f-709b7dcd4aa1 ']' 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88369 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88369 ']' 00:18:44.932 07:50:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88369 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88369 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88369' 00:18:44.932 killing process with pid 88369 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88369 00:18:44.932 [2024-11-29 07:50:34.676392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.932 [2024-11-29 07:50:34.676458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.932 [2024-11-29 07:50:34.676497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.932 [2024-11-29 07:50:34.676509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:44.932 07:50:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88369 00:18:44.932 [2024-11-29 07:50:34.867324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.310 07:50:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:46.310 00:18:46.310 real 0m6.013s 00:18:46.310 user 0m9.106s 00:18:46.310 sys 0m1.133s 00:18:46.310 
07:50:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.310 ************************************ 00:18:46.310 END TEST raid_superblock_test_md_interleaved 00:18:46.310 ************************************ 00:18:46.310 07:50:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.310 07:50:35 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:46.310 07:50:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:46.310 07:50:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.310 07:50:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.310 ************************************ 00:18:46.310 START TEST raid_rebuild_test_sb_md_interleaved 00:18:46.310 ************************************ 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88695 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88695 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88695 ']' 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.310 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.310 [2024-11-29 07:50:36.103110] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:46.310 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:46.310 Zero copy mechanism will not be used. 
00:18:46.310 [2024-11-29 07:50:36.103288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88695 ] 00:18:46.570 [2024-11-29 07:50:36.276716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.570 [2024-11-29 07:50:36.382768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.828 [2024-11-29 07:50:36.573432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.828 [2024-11-29 07:50:36.573566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.087 07:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.087 BaseBdev1_malloc 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.087 07:50:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.087 [2024-11-29 07:50:37.013233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.087 [2024-11-29 07:50:37.013300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.087 [2024-11-29 07:50:37.013321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:47.087 [2024-11-29 07:50:37.013332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.087 [2024-11-29 07:50:37.015139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.087 [2024-11-29 07:50:37.015262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.087 BaseBdev1 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.087 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 BaseBdev2_malloc 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.346 [2024-11-29 07:50:37.066977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.346 [2024-11-29 07:50:37.067038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.346 [2024-11-29 07:50:37.067054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:47.346 [2024-11-29 07:50:37.067065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.346 [2024-11-29 07:50:37.068799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.346 [2024-11-29 07:50:37.068836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.346 BaseBdev2 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 spare_malloc 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 spare_delay 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 [2024-11-29 07:50:37.165514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.346 [2024-11-29 07:50:37.165572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.346 [2024-11-29 07:50:37.165591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:47.346 [2024-11-29 07:50:37.165602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.346 [2024-11-29 07:50:37.167383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.346 [2024-11-29 07:50:37.167422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.346 spare 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 [2024-11-29 07:50:37.177521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.346 [2024-11-29 07:50:37.179243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.346 [2024-11-29 
07:50:37.179429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:47.346 [2024-11-29 07:50:37.179444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:47.346 [2024-11-29 07:50:37.179515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:47.346 [2024-11-29 07:50:37.179581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:47.346 [2024-11-29 07:50:37.179589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:47.346 [2024-11-29 07:50:37.179650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.346 "name": "raid_bdev1", 00:18:47.346 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:47.346 "strip_size_kb": 0, 00:18:47.346 "state": "online", 00:18:47.346 "raid_level": "raid1", 00:18:47.346 "superblock": true, 00:18:47.346 "num_base_bdevs": 2, 00:18:47.346 "num_base_bdevs_discovered": 2, 00:18:47.346 "num_base_bdevs_operational": 2, 00:18:47.346 "base_bdevs_list": [ 00:18:47.346 { 00:18:47.346 "name": "BaseBdev1", 00:18:47.346 "uuid": "974c32ee-259c-5e37-bd95-4ffdb425b58a", 00:18:47.346 "is_configured": true, 00:18:47.346 "data_offset": 256, 00:18:47.346 "data_size": 7936 00:18:47.346 }, 00:18:47.346 { 00:18:47.346 "name": "BaseBdev2", 00:18:47.346 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:47.346 "is_configured": true, 00:18:47.346 "data_offset": 256, 00:18:47.346 "data_size": 7936 00:18:47.346 } 00:18:47.346 ] 00:18:47.346 }' 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.346 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 07:50:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 [2024-11-29 07:50:37.608993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:47.913 07:50:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 [2024-11-29 07:50:37.708558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.913 07:50:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.913 "name": "raid_bdev1", 00:18:47.913 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:47.913 "strip_size_kb": 0, 00:18:47.913 "state": "online", 00:18:47.913 "raid_level": "raid1", 00:18:47.913 "superblock": true, 00:18:47.913 "num_base_bdevs": 2, 00:18:47.913 "num_base_bdevs_discovered": 1, 00:18:47.913 "num_base_bdevs_operational": 1, 00:18:47.913 "base_bdevs_list": [ 00:18:47.913 { 00:18:47.913 "name": null, 00:18:47.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.913 "is_configured": false, 00:18:47.913 "data_offset": 0, 00:18:47.913 "data_size": 7936 00:18:47.913 }, 00:18:47.913 { 00:18:47.913 "name": "BaseBdev2", 00:18:47.913 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:47.913 "is_configured": true, 00:18:47.913 "data_offset": 256, 00:18:47.913 "data_size": 7936 00:18:47.913 } 00:18:47.913 ] 00:18:47.913 }' 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.913 07:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.479 07:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.479 07:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.479 07:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.479 [2024-11-29 07:50:38.155837] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.479 [2024-11-29 07:50:38.171449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:48.479 07:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.479 07:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:48.479 [2024-11-29 07:50:38.173272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.410 "name": "raid_bdev1", 00:18:49.410 
"uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:49.410 "strip_size_kb": 0, 00:18:49.410 "state": "online", 00:18:49.410 "raid_level": "raid1", 00:18:49.410 "superblock": true, 00:18:49.410 "num_base_bdevs": 2, 00:18:49.410 "num_base_bdevs_discovered": 2, 00:18:49.410 "num_base_bdevs_operational": 2, 00:18:49.410 "process": { 00:18:49.410 "type": "rebuild", 00:18:49.410 "target": "spare", 00:18:49.410 "progress": { 00:18:49.410 "blocks": 2560, 00:18:49.410 "percent": 32 00:18:49.410 } 00:18:49.410 }, 00:18:49.410 "base_bdevs_list": [ 00:18:49.410 { 00:18:49.410 "name": "spare", 00:18:49.410 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:49.410 "is_configured": true, 00:18:49.410 "data_offset": 256, 00:18:49.410 "data_size": 7936 00:18:49.410 }, 00:18:49.410 { 00:18:49.410 "name": "BaseBdev2", 00:18:49.410 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:49.410 "is_configured": true, 00:18:49.410 "data_offset": 256, 00:18:49.410 "data_size": 7936 00:18:49.410 } 00:18:49.410 ] 00:18:49.410 }' 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.410 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.411 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.411 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.411 [2024-11-29 07:50:39.337042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:49.668 [2024-11-29 07:50:39.377974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.668 [2024-11-29 07:50:39.378072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.668 [2024-11-29 07:50:39.378088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.668 [2024-11-29 07:50:39.378111] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.668 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.668 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.668 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.668 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.669 "name": "raid_bdev1", 00:18:49.669 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:49.669 "strip_size_kb": 0, 00:18:49.669 "state": "online", 00:18:49.669 "raid_level": "raid1", 00:18:49.669 "superblock": true, 00:18:49.669 "num_base_bdevs": 2, 00:18:49.669 "num_base_bdevs_discovered": 1, 00:18:49.669 "num_base_bdevs_operational": 1, 00:18:49.669 "base_bdevs_list": [ 00:18:49.669 { 00:18:49.669 "name": null, 00:18:49.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.669 "is_configured": false, 00:18:49.669 "data_offset": 0, 00:18:49.669 "data_size": 7936 00:18:49.669 }, 00:18:49.669 { 00:18:49.669 "name": "BaseBdev2", 00:18:49.669 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:49.669 "is_configured": true, 00:18:49.669 "data_offset": 256, 00:18:49.669 "data_size": 7936 00:18:49.669 } 00:18:49.669 ] 00:18:49.669 }' 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.669 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.235 "name": "raid_bdev1", 00:18:50.235 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:50.235 "strip_size_kb": 0, 00:18:50.235 "state": "online", 00:18:50.235 "raid_level": "raid1", 00:18:50.235 "superblock": true, 00:18:50.235 "num_base_bdevs": 2, 00:18:50.235 "num_base_bdevs_discovered": 1, 00:18:50.235 "num_base_bdevs_operational": 1, 00:18:50.235 "base_bdevs_list": [ 00:18:50.235 { 00:18:50.235 "name": null, 00:18:50.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.235 "is_configured": false, 00:18:50.235 "data_offset": 0, 00:18:50.235 "data_size": 7936 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "name": "BaseBdev2", 00:18:50.235 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:50.235 "is_configured": true, 00:18:50.235 "data_offset": 256, 00:18:50.235 "data_size": 7936 00:18:50.235 } 00:18:50.235 ] 00:18:50.235 }' 
00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.235 07:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.235 [2024-11-29 07:50:39.994725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.235 [2024-11-29 07:50:40.009652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:50.235 07:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.235 07:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.235 [2024-11-29 07:50:40.011485] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.170 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.170 "name": "raid_bdev1", 00:18:51.170 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:51.170 "strip_size_kb": 0, 00:18:51.170 "state": "online", 00:18:51.170 "raid_level": "raid1", 00:18:51.170 "superblock": true, 00:18:51.170 "num_base_bdevs": 2, 00:18:51.170 "num_base_bdevs_discovered": 2, 00:18:51.170 "num_base_bdevs_operational": 2, 00:18:51.170 "process": { 00:18:51.170 "type": "rebuild", 00:18:51.170 "target": "spare", 00:18:51.170 "progress": { 00:18:51.170 "blocks": 2560, 00:18:51.170 "percent": 32 00:18:51.170 } 00:18:51.170 }, 00:18:51.170 "base_bdevs_list": [ 00:18:51.170 { 00:18:51.170 "name": "spare", 00:18:51.170 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:51.170 "is_configured": true, 00:18:51.170 "data_offset": 256, 00:18:51.170 "data_size": 7936 00:18:51.170 }, 00:18:51.170 { 00:18:51.170 "name": "BaseBdev2", 00:18:51.170 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:51.170 "is_configured": true, 00:18:51.170 "data_offset": 256, 00:18:51.170 "data_size": 7936 00:18:51.170 } 00:18:51.170 ] 00:18:51.170 }' 00:18:51.170 07:50:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:51.430 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=720 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.430 07:50:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.430 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.431 "name": "raid_bdev1", 00:18:51.431 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:51.431 "strip_size_kb": 0, 00:18:51.431 "state": "online", 00:18:51.431 "raid_level": "raid1", 00:18:51.431 "superblock": true, 00:18:51.431 "num_base_bdevs": 2, 00:18:51.431 "num_base_bdevs_discovered": 2, 00:18:51.431 "num_base_bdevs_operational": 2, 00:18:51.431 "process": { 00:18:51.431 "type": "rebuild", 00:18:51.431 "target": "spare", 00:18:51.431 "progress": { 00:18:51.431 "blocks": 2816, 00:18:51.431 "percent": 35 00:18:51.431 } 00:18:51.431 }, 00:18:51.431 "base_bdevs_list": [ 00:18:51.431 { 00:18:51.431 "name": "spare", 00:18:51.431 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:51.431 "is_configured": true, 00:18:51.431 "data_offset": 256, 00:18:51.431 "data_size": 7936 00:18:51.431 }, 00:18:51.431 { 00:18:51.431 "name": "BaseBdev2", 00:18:51.431 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:51.431 "is_configured": true, 00:18:51.431 "data_offset": 256, 00:18:51.431 "data_size": 7936 00:18:51.431 } 00:18:51.431 ] 00:18:51.431 }' 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.431 07:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.366 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.625 07:50:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.625 "name": "raid_bdev1", 00:18:52.625 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:52.625 "strip_size_kb": 0, 00:18:52.625 "state": "online", 00:18:52.625 "raid_level": "raid1", 00:18:52.625 "superblock": true, 00:18:52.625 "num_base_bdevs": 2, 00:18:52.625 "num_base_bdevs_discovered": 2, 00:18:52.625 "num_base_bdevs_operational": 2, 00:18:52.625 "process": { 00:18:52.625 "type": "rebuild", 00:18:52.625 "target": "spare", 00:18:52.625 "progress": { 00:18:52.625 "blocks": 5632, 00:18:52.625 "percent": 70 00:18:52.625 } 00:18:52.625 }, 00:18:52.625 "base_bdevs_list": [ 00:18:52.625 { 00:18:52.625 "name": "spare", 00:18:52.625 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 256, 00:18:52.625 "data_size": 7936 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev2", 00:18:52.625 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 256, 00:18:52.625 "data_size": 7936 00:18:52.625 } 00:18:52.625 ] 00:18:52.625 }' 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.625 07:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.193 [2024-11-29 07:50:43.123068] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:53.193 [2024-11-29 07:50:43.123161] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:53.193 [2024-11-29 07:50:43.123265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.761 "name": "raid_bdev1", 00:18:53.761 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:53.761 "strip_size_kb": 0, 00:18:53.761 "state": "online", 00:18:53.761 "raid_level": "raid1", 00:18:53.761 "superblock": true, 00:18:53.761 "num_base_bdevs": 2, 00:18:53.761 
"num_base_bdevs_discovered": 2, 00:18:53.761 "num_base_bdevs_operational": 2, 00:18:53.761 "base_bdevs_list": [ 00:18:53.761 { 00:18:53.761 "name": "spare", 00:18:53.761 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:53.761 "is_configured": true, 00:18:53.761 "data_offset": 256, 00:18:53.761 "data_size": 7936 00:18:53.761 }, 00:18:53.761 { 00:18:53.761 "name": "BaseBdev2", 00:18:53.761 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:53.761 "is_configured": true, 00:18:53.761 "data_offset": 256, 00:18:53.761 "data_size": 7936 00:18:53.761 } 00:18:53.761 ] 00:18:53.761 }' 00:18:53.761 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.762 
07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.762 "name": "raid_bdev1", 00:18:53.762 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:53.762 "strip_size_kb": 0, 00:18:53.762 "state": "online", 00:18:53.762 "raid_level": "raid1", 00:18:53.762 "superblock": true, 00:18:53.762 "num_base_bdevs": 2, 00:18:53.762 "num_base_bdevs_discovered": 2, 00:18:53.762 "num_base_bdevs_operational": 2, 00:18:53.762 "base_bdevs_list": [ 00:18:53.762 { 00:18:53.762 "name": "spare", 00:18:53.762 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:53.762 "is_configured": true, 00:18:53.762 "data_offset": 256, 00:18:53.762 "data_size": 7936 00:18:53.762 }, 00:18:53.762 { 00:18:53.762 "name": "BaseBdev2", 00:18:53.762 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:53.762 "is_configured": true, 00:18:53.762 "data_offset": 256, 00:18:53.762 "data_size": 7936 00:18:53.762 } 00:18:53.762 ] 00:18:53.762 }' 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.762 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.020 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.020 07:50:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.020 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.020 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.021 "name": 
"raid_bdev1", 00:18:54.021 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:54.021 "strip_size_kb": 0, 00:18:54.021 "state": "online", 00:18:54.021 "raid_level": "raid1", 00:18:54.021 "superblock": true, 00:18:54.021 "num_base_bdevs": 2, 00:18:54.021 "num_base_bdevs_discovered": 2, 00:18:54.021 "num_base_bdevs_operational": 2, 00:18:54.021 "base_bdevs_list": [ 00:18:54.021 { 00:18:54.021 "name": "spare", 00:18:54.021 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:54.021 "is_configured": true, 00:18:54.021 "data_offset": 256, 00:18:54.021 "data_size": 7936 00:18:54.021 }, 00:18:54.021 { 00:18:54.021 "name": "BaseBdev2", 00:18:54.021 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:54.021 "is_configured": true, 00:18:54.021 "data_offset": 256, 00:18:54.021 "data_size": 7936 00:18:54.021 } 00:18:54.021 ] 00:18:54.021 }' 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.021 07:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.280 [2024-11-29 07:50:44.198702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.280 [2024-11-29 07:50:44.198791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.280 [2024-11-29 07:50:44.198889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.280 [2024-11-29 07:50:44.198982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.280 [2024-11-29 
07:50:44.199051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.280 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.539 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:54.539 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.540 07:50:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.540 [2024-11-29 07:50:44.274563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.540 [2024-11-29 07:50:44.274616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.540 [2024-11-29 07:50:44.274636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:54.540 [2024-11-29 07:50:44.274644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.540 [2024-11-29 07:50:44.276595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.540 [2024-11-29 07:50:44.276631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.540 [2024-11-29 07:50:44.276691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:54.540 [2024-11-29 07:50:44.276735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.540 [2024-11-29 07:50:44.276839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.540 spare 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.540 [2024-11-29 07:50:44.376730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:54.540 [2024-11-29 07:50:44.376759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:54.540 [2024-11-29 07:50:44.376840] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:54.540 [2024-11-29 07:50:44.376913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:54.540 [2024-11-29 07:50:44.376923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:54.540 [2024-11-29 07:50:44.376991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.540 
07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.540 "name": "raid_bdev1", 00:18:54.540 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:54.540 "strip_size_kb": 0, 00:18:54.540 "state": "online", 00:18:54.540 "raid_level": "raid1", 00:18:54.540 "superblock": true, 00:18:54.540 "num_base_bdevs": 2, 00:18:54.540 "num_base_bdevs_discovered": 2, 00:18:54.540 "num_base_bdevs_operational": 2, 00:18:54.540 "base_bdevs_list": [ 00:18:54.540 { 00:18:54.540 "name": "spare", 00:18:54.540 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:54.540 "is_configured": true, 00:18:54.540 "data_offset": 256, 00:18:54.540 "data_size": 7936 00:18:54.540 }, 00:18:54.540 { 00:18:54.540 "name": "BaseBdev2", 00:18:54.540 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:54.540 "is_configured": true, 00:18:54.540 "data_offset": 256, 00:18:54.540 "data_size": 7936 00:18:54.540 } 00:18:54.540 ] 00:18:54.540 }' 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.540 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.108 07:50:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.108 "name": "raid_bdev1", 00:18:55.108 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:55.108 "strip_size_kb": 0, 00:18:55.108 "state": "online", 00:18:55.108 "raid_level": "raid1", 00:18:55.108 "superblock": true, 00:18:55.108 "num_base_bdevs": 2, 00:18:55.108 "num_base_bdevs_discovered": 2, 00:18:55.108 "num_base_bdevs_operational": 2, 00:18:55.108 "base_bdevs_list": [ 00:18:55.108 { 00:18:55.108 "name": "spare", 00:18:55.108 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:55.108 "is_configured": true, 00:18:55.108 "data_offset": 256, 00:18:55.108 "data_size": 7936 00:18:55.108 }, 00:18:55.108 { 00:18:55.108 "name": "BaseBdev2", 00:18:55.108 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:55.108 "is_configured": true, 00:18:55.108 "data_offset": 256, 00:18:55.108 "data_size": 7936 00:18:55.108 } 00:18:55.108 ] 00:18:55.108 }' 00:18:55.108 07:50:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 07:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 [2024-11-29 07:50:45.001385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.108 07:50:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.108 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.368 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.368 "name": "raid_bdev1", 00:18:55.368 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:55.368 "strip_size_kb": 0, 00:18:55.368 "state": "online", 00:18:55.368 
"raid_level": "raid1", 00:18:55.368 "superblock": true, 00:18:55.368 "num_base_bdevs": 2, 00:18:55.368 "num_base_bdevs_discovered": 1, 00:18:55.368 "num_base_bdevs_operational": 1, 00:18:55.368 "base_bdevs_list": [ 00:18:55.368 { 00:18:55.368 "name": null, 00:18:55.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.368 "is_configured": false, 00:18:55.368 "data_offset": 0, 00:18:55.368 "data_size": 7936 00:18:55.368 }, 00:18:55.368 { 00:18:55.368 "name": "BaseBdev2", 00:18:55.368 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:55.368 "is_configured": true, 00:18:55.368 "data_offset": 256, 00:18:55.368 "data_size": 7936 00:18:55.368 } 00:18:55.368 ] 00:18:55.368 }' 00:18:55.368 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.368 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.634 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.634 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.634 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.634 [2024-11-29 07:50:45.476655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.634 [2024-11-29 07:50:45.476841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.634 [2024-11-29 07:50:45.476926] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:55.634 [2024-11-29 07:50:45.476981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.634 [2024-11-29 07:50:45.491696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:55.634 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.634 07:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:55.634 [2024-11-29 07:50:45.493556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:56.612 "name": "raid_bdev1", 00:18:56.612 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:56.612 "strip_size_kb": 0, 00:18:56.612 "state": "online", 00:18:56.612 "raid_level": "raid1", 00:18:56.612 "superblock": true, 00:18:56.612 "num_base_bdevs": 2, 00:18:56.612 "num_base_bdevs_discovered": 2, 00:18:56.612 "num_base_bdevs_operational": 2, 00:18:56.612 "process": { 00:18:56.612 "type": "rebuild", 00:18:56.612 "target": "spare", 00:18:56.612 "progress": { 00:18:56.612 "blocks": 2560, 00:18:56.612 "percent": 32 00:18:56.612 } 00:18:56.612 }, 00:18:56.612 "base_bdevs_list": [ 00:18:56.612 { 00:18:56.612 "name": "spare", 00:18:56.612 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:56.612 "is_configured": true, 00:18:56.612 "data_offset": 256, 00:18:56.612 "data_size": 7936 00:18:56.612 }, 00:18:56.612 { 00:18:56.612 "name": "BaseBdev2", 00:18:56.612 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:56.612 "is_configured": true, 00:18:56.612 "data_offset": 256, 00:18:56.612 "data_size": 7936 00:18:56.612 } 00:18:56.612 ] 00:18:56.612 }' 00:18:56.612 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 [2024-11-29 07:50:46.661358] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.872 [2024-11-29 07:50:46.698270] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.872 [2024-11-29 07:50:46.698325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.872 [2024-11-29 07:50:46.698338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.872 [2024-11-29 07:50:46.698346] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.872 07:50:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.872 "name": "raid_bdev1", 00:18:56.872 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:56.872 "strip_size_kb": 0, 00:18:56.872 "state": "online", 00:18:56.872 "raid_level": "raid1", 00:18:56.872 "superblock": true, 00:18:56.872 "num_base_bdevs": 2, 00:18:56.872 "num_base_bdevs_discovered": 1, 00:18:56.872 "num_base_bdevs_operational": 1, 00:18:56.872 "base_bdevs_list": [ 00:18:56.872 { 00:18:56.872 "name": null, 00:18:56.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.872 "is_configured": false, 00:18:56.872 "data_offset": 0, 00:18:56.872 "data_size": 7936 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "name": "BaseBdev2", 00:18:56.872 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:56.872 "is_configured": true, 00:18:56.872 "data_offset": 256, 00:18:56.872 "data_size": 7936 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }' 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.872 07:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.440 07:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.440 07:50:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.440 07:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.440 [2024-11-29 07:50:47.222870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.440 [2024-11-29 07:50:47.222988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.440 [2024-11-29 07:50:47.223030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:57.440 [2024-11-29 07:50:47.223060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.440 [2024-11-29 07:50:47.223293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.440 [2024-11-29 07:50:47.223347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.440 [2024-11-29 07:50:47.223420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.440 [2024-11-29 07:50:47.223456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.440 [2024-11-29 07:50:47.223495] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:57.440 [2024-11-29 07:50:47.223562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.440 [2024-11-29 07:50:47.238343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:57.440 spare 00:18:57.440 07:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.440 [2024-11-29 07:50:47.240259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.440 07:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:58.377 "name": "raid_bdev1", 00:18:58.377 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:58.377 "strip_size_kb": 0, 00:18:58.377 "state": "online", 00:18:58.377 "raid_level": "raid1", 00:18:58.377 "superblock": true, 00:18:58.377 "num_base_bdevs": 2, 00:18:58.377 "num_base_bdevs_discovered": 2, 00:18:58.377 "num_base_bdevs_operational": 2, 00:18:58.377 "process": { 00:18:58.377 "type": "rebuild", 00:18:58.377 "target": "spare", 00:18:58.377 "progress": { 00:18:58.377 "blocks": 2560, 00:18:58.377 "percent": 32 00:18:58.377 } 00:18:58.377 }, 00:18:58.377 "base_bdevs_list": [ 00:18:58.377 { 00:18:58.377 "name": "spare", 00:18:58.377 "uuid": "eb8fb41f-334f-548a-a377-ab8b8fda3442", 00:18:58.377 "is_configured": true, 00:18:58.377 "data_offset": 256, 00:18:58.377 "data_size": 7936 00:18:58.377 }, 00:18:58.377 { 00:18:58.377 "name": "BaseBdev2", 00:18:58.377 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:58.377 "is_configured": true, 00:18:58.377 "data_offset": 256, 00:18:58.377 "data_size": 7936 00:18:58.377 } 00:18:58.377 ] 00:18:58.377 }' 00:18:58.377 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.636 [2024-11-29 
07:50:48.408504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.636 [2024-11-29 07:50:48.444904] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:58.636 [2024-11-29 07:50:48.444954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.636 [2024-11-29 07:50:48.444971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.636 [2024-11-29 07:50:48.444977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.636 07:50:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.636 "name": "raid_bdev1", 00:18:58.636 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:58.636 "strip_size_kb": 0, 00:18:58.636 "state": "online", 00:18:58.636 "raid_level": "raid1", 00:18:58.636 "superblock": true, 00:18:58.636 "num_base_bdevs": 2, 00:18:58.636 "num_base_bdevs_discovered": 1, 00:18:58.636 "num_base_bdevs_operational": 1, 00:18:58.636 "base_bdevs_list": [ 00:18:58.636 { 00:18:58.636 "name": null, 00:18:58.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.636 "is_configured": false, 00:18:58.636 "data_offset": 0, 00:18:58.636 "data_size": 7936 00:18:58.636 }, 00:18:58.636 { 00:18:58.636 "name": "BaseBdev2", 00:18:58.636 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:58.636 "is_configured": true, 00:18:58.636 "data_offset": 256, 00:18:58.636 "data_size": 7936 00:18:58.636 } 00:18:58.636 ] 00:18:58.636 }' 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.636 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.204 07:50:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.204 "name": "raid_bdev1", 00:18:59.204 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:18:59.204 "strip_size_kb": 0, 00:18:59.204 "state": "online", 00:18:59.204 "raid_level": "raid1", 00:18:59.204 "superblock": true, 00:18:59.204 "num_base_bdevs": 2, 00:18:59.204 "num_base_bdevs_discovered": 1, 00:18:59.204 "num_base_bdevs_operational": 1, 00:18:59.204 "base_bdevs_list": [ 00:18:59.204 { 00:18:59.204 "name": null, 00:18:59.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.204 "is_configured": false, 00:18:59.204 "data_offset": 0, 00:18:59.204 "data_size": 7936 00:18:59.204 }, 00:18:59.204 { 00:18:59.204 "name": "BaseBdev2", 00:18:59.204 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:18:59.204 "is_configured": true, 00:18:59.204 "data_offset": 256, 
00:18:59.204 "data_size": 7936 00:18:59.204 } 00:18:59.204 ] 00:18:59.204 }' 00:18:59.204 07:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.204 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.205 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.205 [2024-11-29 07:50:49.104751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.205 [2024-11-29 07:50:49.104807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.205 [2024-11-29 07:50:49.104827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:59.205 [2024-11-29 07:50:49.104836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.205 [2024-11-29 07:50:49.104994] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.205 [2024-11-29 07:50:49.105007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.205 [2024-11-29 07:50:49.105052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:59.205 [2024-11-29 07:50:49.105064] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.205 [2024-11-29 07:50:49.105073] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:59.205 [2024-11-29 07:50:49.105083] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:59.205 BaseBdev1 00:18:59.205 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.205 07:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.581 07:50:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.581 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.582 "name": "raid_bdev1", 00:19:00.582 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:19:00.582 "strip_size_kb": 0, 00:19:00.582 "state": "online", 00:19:00.582 "raid_level": "raid1", 00:19:00.582 "superblock": true, 00:19:00.582 "num_base_bdevs": 2, 00:19:00.582 "num_base_bdevs_discovered": 1, 00:19:00.582 "num_base_bdevs_operational": 1, 00:19:00.582 "base_bdevs_list": [ 00:19:00.582 { 00:19:00.582 "name": null, 00:19:00.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.582 "is_configured": false, 00:19:00.582 "data_offset": 0, 00:19:00.582 "data_size": 7936 00:19:00.582 }, 00:19:00.582 { 00:19:00.582 "name": "BaseBdev2", 00:19:00.582 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:19:00.582 "is_configured": true, 00:19:00.582 "data_offset": 256, 00:19:00.582 "data_size": 7936 00:19:00.582 } 00:19:00.582 ] 00:19:00.582 }' 00:19:00.582 07:50:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.582 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.840 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.840 "name": "raid_bdev1", 00:19:00.840 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:19:00.840 "strip_size_kb": 0, 00:19:00.840 "state": "online", 00:19:00.840 "raid_level": "raid1", 00:19:00.840 "superblock": true, 00:19:00.840 "num_base_bdevs": 2, 00:19:00.840 "num_base_bdevs_discovered": 1, 00:19:00.840 "num_base_bdevs_operational": 1, 00:19:00.840 "base_bdevs_list": [ 00:19:00.840 { 00:19:00.840 "name": 
null, 00:19:00.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.841 "is_configured": false, 00:19:00.841 "data_offset": 0, 00:19:00.841 "data_size": 7936 00:19:00.841 }, 00:19:00.841 { 00:19:00.841 "name": "BaseBdev2", 00:19:00.841 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:19:00.841 "is_configured": true, 00:19:00.841 "data_offset": 256, 00:19:00.841 "data_size": 7936 00:19:00.841 } 00:19:00.841 ] 00:19:00.841 }' 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.841 [2024-11-29 07:50:50.690040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.841 [2024-11-29 07:50:50.690234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.841 [2024-11-29 07:50:50.690257] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:00.841 request: 00:19:00.841 { 00:19:00.841 "base_bdev": "BaseBdev1", 00:19:00.841 "raid_bdev": "raid_bdev1", 00:19:00.841 "method": "bdev_raid_add_base_bdev", 00:19:00.841 "req_id": 1 00:19:00.841 } 00:19:00.841 Got JSON-RPC error response 00:19:00.841 response: 00:19:00.841 { 00:19:00.841 "code": -22, 00:19:00.841 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:00.841 } 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.841 07:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.784 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.044 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.044 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.044 "name": "raid_bdev1", 00:19:02.044 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:19:02.044 "strip_size_kb": 0, 
00:19:02.044 "state": "online", 00:19:02.044 "raid_level": "raid1", 00:19:02.044 "superblock": true, 00:19:02.044 "num_base_bdevs": 2, 00:19:02.044 "num_base_bdevs_discovered": 1, 00:19:02.044 "num_base_bdevs_operational": 1, 00:19:02.044 "base_bdevs_list": [ 00:19:02.044 { 00:19:02.044 "name": null, 00:19:02.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.044 "is_configured": false, 00:19:02.044 "data_offset": 0, 00:19:02.044 "data_size": 7936 00:19:02.044 }, 00:19:02.044 { 00:19:02.044 "name": "BaseBdev2", 00:19:02.044 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:19:02.044 "is_configured": true, 00:19:02.044 "data_offset": 256, 00:19:02.044 "data_size": 7936 00:19:02.044 } 00:19:02.044 ] 00:19:02.044 }' 00:19:02.044 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.044 07:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.304 
07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.304 "name": "raid_bdev1", 00:19:02.304 "uuid": "5335c93e-8c58-417d-b979-e6c67d22374a", 00:19:02.304 "strip_size_kb": 0, 00:19:02.304 "state": "online", 00:19:02.304 "raid_level": "raid1", 00:19:02.304 "superblock": true, 00:19:02.304 "num_base_bdevs": 2, 00:19:02.304 "num_base_bdevs_discovered": 1, 00:19:02.304 "num_base_bdevs_operational": 1, 00:19:02.304 "base_bdevs_list": [ 00:19:02.304 { 00:19:02.304 "name": null, 00:19:02.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.304 "is_configured": false, 00:19:02.304 "data_offset": 0, 00:19:02.304 "data_size": 7936 00:19:02.304 }, 00:19:02.304 { 00:19:02.304 "name": "BaseBdev2", 00:19:02.304 "uuid": "bf5a9089-749c-59a0-ba4f-512f953d81bd", 00:19:02.304 "is_configured": true, 00:19:02.304 "data_offset": 256, 00:19:02.304 "data_size": 7936 00:19:02.304 } 00:19:02.304 ] 00:19:02.304 }' 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.304 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.564 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.564 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.564 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88695 00:19:02.564 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88695 ']' 00:19:02.565 07:50:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88695 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88695 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88695' 00:19:02.565 killing process with pid 88695 00:19:02.565 Received shutdown signal, test time was about 60.000000 seconds 00:19:02.565 00:19:02.565 Latency(us) 00:19:02.565 [2024-11-29T07:50:52.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.565 [2024-11-29T07:50:52.510Z] =================================================================================================================== 00:19:02.565 [2024-11-29T07:50:52.510Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88695 00:19:02.565 [2024-11-29 07:50:52.324612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.565 07:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88695 00:19:02.565 [2024-11-29 07:50:52.324759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.565 [2024-11-29 07:50:52.324817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:02.565 [2024-11-29 07:50:52.324830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:02.824 [2024-11-29 07:50:52.635729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.207 07:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:04.207 00:19:04.207 real 0m17.792s 00:19:04.207 user 0m23.390s 00:19:04.207 sys 0m1.693s 00:19:04.207 07:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.207 07:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.207 ************************************ 00:19:04.207 END TEST raid_rebuild_test_sb_md_interleaved 00:19:04.207 ************************************ 00:19:04.207 07:50:53 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:04.207 07:50:53 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:04.207 07:50:53 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88695 ']' 00:19:04.207 07:50:53 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88695 00:19:04.207 07:50:53 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:04.207 ************************************ 00:19:04.207 END TEST bdev_raid 00:19:04.207 ************************************ 00:19:04.207 00:19:04.207 real 11m42.670s 00:19:04.207 user 15m53.222s 00:19:04.207 sys 1m47.800s 00:19:04.207 07:50:53 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.207 07:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.207 07:50:53 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.207 07:50:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.207 07:50:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.207 07:50:53 -- common/autotest_common.sh@10 -- # set +x 00:19:04.207 
************************************ 00:19:04.207 START TEST spdkcli_raid 00:19:04.207 ************************************ 00:19:04.207 07:50:53 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.207 * Looking for test storage... 00:19:04.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.207 07:50:54 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:04.207 07:50:54 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:04.207 07:50:54 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:04.468 07:50:54 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.468 07:50:54 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:04.468 07:50:54 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.468 07:50:54 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.468 --rc genhtml_branch_coverage=1 00:19:04.468 --rc genhtml_function_coverage=1 00:19:04.469 --rc genhtml_legend=1 00:19:04.469 --rc geninfo_all_blocks=1 00:19:04.469 --rc geninfo_unexecuted_blocks=1 00:19:04.469 00:19:04.469 ' 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.469 --rc genhtml_branch_coverage=1 00:19:04.469 --rc genhtml_function_coverage=1 00:19:04.469 --rc genhtml_legend=1 00:19:04.469 --rc geninfo_all_blocks=1 00:19:04.469 --rc geninfo_unexecuted_blocks=1 00:19:04.469 00:19:04.469 ' 00:19:04.469 
07:50:54 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.469 --rc genhtml_branch_coverage=1 00:19:04.469 --rc genhtml_function_coverage=1 00:19:04.469 --rc genhtml_legend=1 00:19:04.469 --rc geninfo_all_blocks=1 00:19:04.469 --rc geninfo_unexecuted_blocks=1 00:19:04.469 00:19:04.469 ' 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.469 --rc genhtml_branch_coverage=1 00:19:04.469 --rc genhtml_function_coverage=1 00:19:04.469 --rc genhtml_legend=1 00:19:04.469 --rc geninfo_all_blocks=1 00:19:04.469 --rc geninfo_unexecuted_blocks=1 00:19:04.469 00:19:04.469 ' 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:04.469 07:50:54 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89378 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:04.469 07:50:54 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89378 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89378 ']' 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.469 07:50:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 [2024-11-29 07:50:54.322246] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:04.469 [2024-11-29 07:50:54.322364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89378 ] 00:19:04.729 [2024-11-29 07:50:54.497404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:04.729 [2024-11-29 07:50:54.630804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.729 [2024-11-29 07:50:54.630837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.668 07:50:55 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.668 07:50:55 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:05.668 07:50:55 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:05.668 07:50:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.668 07:50:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.927 07:50:55 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:05.927 07:50:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.927 07:50:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.927 07:50:55 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:05.927 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:05.927 ' 00:19:07.309 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:07.309 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:07.568 07:50:57 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:07.568 07:50:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.568 07:50:57 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.568 07:50:57 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:07.568 07:50:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.568 07:50:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.568 07:50:57 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:07.568 ' 00:19:08.508 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:08.767 07:50:58 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:08.768 07:50:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.768 07:50:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.768 07:50:58 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:08.768 07:50:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.768 07:50:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.768 07:50:58 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:08.768 07:50:58 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:09.338 07:50:59 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:09.338 07:50:59 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:09.338 07:50:59 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:09.338 07:50:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.338 07:50:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.338 07:50:59 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:09.338 07:50:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.338 07:50:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.338 07:50:59 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:09.338 ' 00:19:10.278 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:10.278 07:51:00 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:10.278 07:51:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.278 07:51:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.537 07:51:00 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:10.538 07:51:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.538 07:51:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.538 07:51:00 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:10.538 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:10.538 ' 00:19:11.918 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:11.918 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:11.918 07:51:01 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.918 07:51:01 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89378 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89378 ']' 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89378 00:19:11.918 07:51:01 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89378 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89378' 00:19:11.918 killing process with pid 89378 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89378 00:19:11.918 07:51:01 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89378 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89378 ']' 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89378 00:19:14.458 07:51:04 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89378 ']' 00:19:14.458 07:51:04 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89378 00:19:14.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89378) - No such process 00:19:14.458 07:51:04 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89378 is not found' 00:19:14.458 Process with pid 89378 is not found 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:14.458 07:51:04 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:14.458 00:19:14.458 real 0m10.392s 00:19:14.458 user 0m21.023s 00:19:14.458 sys 
0m1.444s 00:19:14.458 07:51:04 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.458 07:51:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.458 ************************************ 00:19:14.458 END TEST spdkcli_raid 00:19:14.458 ************************************ 00:19:14.718 07:51:04 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:14.718 07:51:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.718 07:51:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.719 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:19:14.719 ************************************ 00:19:14.719 START TEST blockdev_raid5f 00:19:14.719 ************************************ 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:14.719 * Looking for test storage... 00:19:14.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.719 07:51:04 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:14.719 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.719 --rc genhtml_branch_coverage=1 00:19:14.719 --rc genhtml_function_coverage=1 00:19:14.719 --rc genhtml_legend=1 00:19:14.719 --rc geninfo_all_blocks=1 00:19:14.719 --rc geninfo_unexecuted_blocks=1 00:19:14.719 00:19:14.719 ' 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:14.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.719 --rc genhtml_branch_coverage=1 00:19:14.719 --rc genhtml_function_coverage=1 00:19:14.719 --rc genhtml_legend=1 00:19:14.719 --rc geninfo_all_blocks=1 00:19:14.719 --rc geninfo_unexecuted_blocks=1 00:19:14.719 00:19:14.719 ' 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:14.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.719 --rc genhtml_branch_coverage=1 00:19:14.719 --rc genhtml_function_coverage=1 00:19:14.719 --rc genhtml_legend=1 00:19:14.719 --rc geninfo_all_blocks=1 00:19:14.719 --rc geninfo_unexecuted_blocks=1 00:19:14.719 00:19:14.719 ' 00:19:14.719 07:51:04 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:14.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.719 --rc genhtml_branch_coverage=1 00:19:14.719 --rc genhtml_function_coverage=1 00:19:14.719 --rc genhtml_legend=1 00:19:14.719 --rc geninfo_all_blocks=1 00:19:14.719 --rc geninfo_unexecuted_blocks=1 00:19:14.719 00:19:14.719 ' 00:19:14.719 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89658 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:14.979 07:51:04 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89658 00:19:14.979 07:51:04 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89658 ']' 00:19:14.979 07:51:04 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.980 07:51:04 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.980 07:51:04 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.980 07:51:04 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.980 07:51:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:14.980 [2024-11-29 07:51:04.782932] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:14.980 [2024-11-29 07:51:04.783121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89658 ] 00:19:15.240 [2024-11-29 07:51:04.954793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.240 [2024-11-29 07:51:05.085703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.179 07:51:06 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.179 07:51:06 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:16.179 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:16.179 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:16.179 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:16.179 07:51:06 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.179 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.179 Malloc0 00:19:16.440 Malloc1 00:19:16.440 Malloc2 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == 
false)' 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e7e3480d-abfd-4e60-8ad6-5bc7bc947e0f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"4b9f01a1-0ed3-48f2-ae4e-8adeff9ccb21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4c3d81b6-4c83-4bb8-ade1-8f29e2ae8c4c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:16.440 07:51:06 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89658 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89658 ']' 00:19:16.440 07:51:06 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89658 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89658 00:19:16.700 killing process with pid 89658 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89658' 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89658 00:19:16.700 07:51:06 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89658 00:19:19.998 07:51:09 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:19.998 07:51:09 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:19.998 07:51:09 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:19.998 07:51:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.998 07:51:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.998 ************************************ 00:19:19.998 START TEST bdev_hello_world 00:19:19.998 ************************************ 00:19:19.998 07:51:09 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:19.998 [2024-11-29 07:51:09.303260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:19.998 [2024-11-29 07:51:09.303454] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89725 ] 00:19:19.998 [2024-11-29 07:51:09.484492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.998 [2024-11-29 07:51:09.616833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.258 [2024-11-29 07:51:10.169937] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:20.258 [2024-11-29 07:51:10.169988] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:20.258 [2024-11-29 07:51:10.170004] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:20.258 [2024-11-29 07:51:10.170482] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:20.258 [2024-11-29 07:51:10.170621] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:20.258 [2024-11-29 07:51:10.170642] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:20.258 [2024-11-29 07:51:10.170688] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:20.258 00:19:20.258 [2024-11-29 07:51:10.170704] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:21.642 00:19:21.642 real 0m2.279s 00:19:21.642 user 0m1.868s 00:19:21.642 sys 0m0.289s 00:19:21.642 ************************************ 00:19:21.642 END TEST bdev_hello_world 00:19:21.642 ************************************ 00:19:21.642 07:51:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.642 07:51:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:21.642 07:51:11 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:21.642 07:51:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.642 07:51:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.642 07:51:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.642 ************************************ 00:19:21.642 START TEST bdev_bounds 00:19:21.642 ************************************ 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89767 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:21.642 Process bdevio pid: 89767 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89767' 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89767 00:19:21.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89767 ']' 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.642 07:51:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:21.902 [2024-11-29 07:51:11.658025] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:21.902 [2024-11-29 07:51:11.658223] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89767 ] 00:19:21.902 [2024-11-29 07:51:11.823723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.163 [2024-11-29 07:51:11.931490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.163 [2024-11-29 07:51:11.931641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.163 [2024-11-29 07:51:11.931677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.733 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.733 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:22.733 07:51:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:22.733 I/O targets: 00:19:22.733 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:19:22.733 00:19:22.733 00:19:22.733 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.733 http://cunit.sourceforge.net/ 00:19:22.733 00:19:22.733 00:19:22.733 Suite: bdevio tests on: raid5f 00:19:22.733 Test: blockdev write read block ...passed 00:19:22.733 Test: blockdev write zeroes read block ...passed 00:19:22.733 Test: blockdev write zeroes read no split ...passed 00:19:22.993 Test: blockdev write zeroes read split ...passed 00:19:22.993 Test: blockdev write zeroes read split partial ...passed 00:19:22.993 Test: blockdev reset ...passed 00:19:22.993 Test: blockdev write read 8 blocks ...passed 00:19:22.993 Test: blockdev write read size > 128k ...passed 00:19:22.993 Test: blockdev write read invalid size ...passed 00:19:22.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.993 Test: blockdev write read max offset ...passed 00:19:22.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.993 Test: blockdev writev readv 8 blocks ...passed 00:19:22.993 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.993 Test: blockdev writev readv block ...passed 00:19:22.993 Test: blockdev writev readv size > 128k ...passed 00:19:22.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.993 Test: blockdev comparev and writev ...passed 00:19:22.993 Test: blockdev nvme passthru rw ...passed 00:19:22.993 Test: blockdev nvme passthru vendor specific ...passed 00:19:22.993 Test: blockdev nvme admin passthru ...passed 00:19:22.993 Test: blockdev copy ...passed 00:19:22.993 00:19:22.993 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.993 suites 1 1 n/a 0 0 00:19:22.993 tests 23 23 23 0 0 00:19:22.993 asserts 130 130 130 0 n/a 00:19:22.993 00:19:22.993 Elapsed time = 0.603 seconds 00:19:22.993 0 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 89767 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89767 ']' 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89767 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89767 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.993 killing process with pid 89767 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89767' 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89767 00:19:22.993 07:51:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89767 00:19:24.376 ************************************ 00:19:24.376 END TEST bdev_bounds 00:19:24.376 ************************************ 00:19:24.376 07:51:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:24.376 00:19:24.376 real 0m2.680s 00:19:24.376 user 0m6.641s 00:19:24.376 sys 0m0.401s 00:19:24.376 07:51:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.376 07:51:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:24.376 07:51:14 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:24.376 07:51:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:24.376 07:51:14 blockdev_raid5f -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:19:24.376 07:51:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:24.637 ************************************ 00:19:24.637 START TEST bdev_nbd 00:19:24.637 ************************************ 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:24.637 07:51:14 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89827 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:24.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89827 /var/tmp/spdk-nbd.sock 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89827 ']' 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.637 07:51:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:24.637 [2024-11-29 07:51:14.426258] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:24.637 [2024-11-29 07:51:14.426438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.898 [2024-11-29 07:51:14.601247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.898 [2024-11-29 07:51:14.708035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:25.468 07:51:15 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.728 1+0 records in 00:19:25.728 1+0 records out 00:19:25.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405072 s, 10.1 MB/s 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:25.728 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:25.991 { 00:19:25.991 "nbd_device": "/dev/nbd0", 00:19:25.991 "bdev_name": "raid5f" 00:19:25.991 } 00:19:25.991 ]' 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:25.991 { 00:19:25.991 "nbd_device": "/dev/nbd0", 00:19:25.991 "bdev_name": "raid5f" 00:19:25.991 } 00:19:25.991 ]' 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.991 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:26.258 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:26.258 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.258 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:26.258 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.258 07:51:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.258 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:26.258 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:26.258 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.258 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:26.558 /dev/nbd0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:26.558 07:51:16 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.558 1+0 records in 00:19:26.558 1+0 records out 00:19:26.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515671 s, 7.9 MB/s 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.558 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:26.820 { 00:19:26.820 "nbd_device": "/dev/nbd0", 00:19:26.820 "bdev_name": "raid5f" 00:19:26.820 } 00:19:26.820 ]' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:26.820 { 00:19:26.820 "nbd_device": "/dev/nbd0", 00:19:26.820 "bdev_name": "raid5f" 00:19:26.820 } 00:19:26.820 ]' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:26.820 256+0 records in 00:19:26.820 256+0 records out 00:19:26.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442931 s, 237 MB/s 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:26.820 256+0 records in 00:19:26.820 256+0 records out 00:19:26.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311318 s, 33.7 MB/s 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:26.820 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:27.080 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:27.080 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:27.080 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.081 07:51:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:27.341 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:27.602 malloc_lvol_verify 00:19:27.602 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:27.862 fed823db-79b4-4c3a-a0ea-2b3e4697b1d5 00:19:27.862 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:28.123 360f927b-6f87-418d-af2c-8c3137cc09a0 00:19:28.123 07:51:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:28.123 /dev/nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:28.383 mke2fs 1.47.0 (5-Feb-2023) 00:19:28.383 Discarding device blocks: 0/4096 done 00:19:28.383 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:28.383 00:19:28.383 Allocating group tables: 0/1 done 00:19:28.383 Writing inode tables: 0/1 done 00:19:28.383 Creating journal (1024 blocks): done 00:19:28.383 Writing superblocks and filesystem accounting information: 0/1 done 00:19:28.383 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89827 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89827 ']' 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89827 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.383 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89827 00:19:28.643 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.643 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.643 killing process with pid 89827 00:19:28.643 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89827' 00:19:28.643 07:51:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89827 00:19:28.643 07:51:18 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89827 00:19:30.025 07:51:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:30.025 00:19:30.025 real 0m5.554s 00:19:30.025 user 0m7.472s 00:19:30.025 sys 0m1.252s 00:19:30.025 07:51:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.025 07:51:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:30.025 ************************************ 00:19:30.025 END TEST bdev_nbd 00:19:30.025 ************************************ 00:19:30.025 07:51:19 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:30.025 07:51:19 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:30.025 07:51:19 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:30.025 07:51:19 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:30.025 07:51:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.025 07:51:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.025 07:51:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:30.025 ************************************ 00:19:30.025 START TEST bdev_fio 00:19:30.025 ************************************ 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:30.025 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:30.025 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:30.286 07:51:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:30.286 ************************************ 00:19:30.286 START TEST bdev_fio_rw_verify 00:19:30.286 ************************************ 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:30.286 07:51:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.547 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:30.547 fio-3.35 00:19:30.547 Starting 1 thread 00:19:42.773 00:19:42.773 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90028: Fri Nov 29 07:51:31 2024 00:19:42.773 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(490MiB/10001msec) 00:19:42.773 slat (usec): min=17, max=104, avg=19.35, stdev= 1.52 00:19:42.773 clat (usec): min=10, max=301, avg=128.55, stdev=45.56 00:19:42.773 lat (usec): min=29, max=323, avg=147.89, stdev=45.71 00:19:42.773 clat percentiles (usec): 00:19:42.773 | 50.000th=[ 133], 99.000th=[ 206], 99.900th=[ 229], 99.990th=[ 258], 00:19:42.773 | 99.999th=[ 281] 00:19:42.773 write: IOPS=13.1k, BW=51.2MiB/s (53.6MB/s)(505MiB/9876msec); 0 zone resets 00:19:42.773 slat (usec): min=7, max=313, avg=15.87, stdev= 3.32 00:19:42.773 clat (usec): min=58, max=644, avg=293.91, stdev=34.27 00:19:42.773 lat (usec): min=73, max=659, avg=309.78, stdev=34.62 00:19:42.773 clat percentiles (usec): 00:19:42.773 | 50.000th=[ 297], 99.000th=[ 359], 99.900th=[ 396], 99.990th=[ 545], 00:19:42.774 | 99.999th=[ 644] 00:19:42.774 bw ( KiB/s): min=49448, max=54288, per=98.86%, avg=51794.11, stdev=1243.98, samples=19 00:19:42.774 iops : min=12362, max=13572, avg=12948.53, stdev=310.99, samples=19 00:19:42.774 lat (usec) : 20=0.01%, 50=0.01%, 100=17.29%, 
250=38.93%, 500=43.77% 00:19:42.774 lat (usec) : 750=0.01% 00:19:42.774 cpu : usr=99.09%, sys=0.26%, ctx=41, majf=0, minf=10222 00:19:42.774 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.774 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.774 issued rwts: total=125394,129354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:42.774 00:19:42.774 Run status group 0 (all jobs): 00:19:42.774 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=490MiB (514MB), run=10001-10001msec 00:19:42.774 WRITE: bw=51.2MiB/s (53.6MB/s), 51.2MiB/s-51.2MiB/s (53.6MB/s-53.6MB/s), io=505MiB (530MB), run=9876-9876msec 00:19:43.034 ----------------------------------------------------- 00:19:43.034 Suppressions used: 00:19:43.034 count bytes template 00:19:43.034 1 7 /usr/src/fio/parse.c 00:19:43.034 75 7200 /usr/src/fio/iolog.c 00:19:43.034 1 8 libtcmalloc_minimal.so 00:19:43.034 1 904 libcrypto.so 00:19:43.034 ----------------------------------------------------- 00:19:43.034 00:19:43.034 00:19:43.034 real 0m12.836s 00:19:43.034 user 0m13.031s 00:19:43.034 sys 0m0.729s 00:19:43.034 07:51:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.034 07:51:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:43.034 ************************************ 00:19:43.034 END TEST bdev_fio_rw_verify 00:19:43.034 ************************************ 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:43.294 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b7b65d20-53a4-4a0a-88ac-0827b6ca71e3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e7e3480d-abfd-4e60-8ad6-5bc7bc947e0f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4b9f01a1-0ed3-48f2-ae4e-8adeff9ccb21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4c3d81b6-4c83-4bb8-ade1-8f29e2ae8c4c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:43.295 /home/vagrant/spdk_repo/spdk 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:43.295 00:19:43.295 real 0m13.130s 
00:19:43.295 user 0m13.155s 00:19:43.295 sys 0m0.872s 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.295 07:51:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:43.295 ************************************ 00:19:43.295 END TEST bdev_fio 00:19:43.295 ************************************ 00:19:43.295 07:51:33 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:43.295 07:51:33 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:43.295 07:51:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:43.295 07:51:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.295 07:51:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.295 ************************************ 00:19:43.295 START TEST bdev_verify 00:19:43.295 ************************************ 00:19:43.295 07:51:33 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:43.555 [2024-11-29 07:51:33.250140] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:43.555 [2024-11-29 07:51:33.250257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90191 ] 00:19:43.555 [2024-11-29 07:51:33.426595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:43.814 [2024-11-29 07:51:33.563197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.814 [2024-11-29 07:51:33.563227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.385 Running I/O for 5 seconds... 00:19:46.273 10497.00 IOPS, 41.00 MiB/s [2024-11-29T07:51:37.599Z] 10559.50 IOPS, 41.25 MiB/s [2024-11-29T07:51:38.539Z] 10615.33 IOPS, 41.47 MiB/s [2024-11-29T07:51:39.474Z] 10619.00 IOPS, 41.48 MiB/s [2024-11-29T07:51:39.474Z] 10598.60 IOPS, 41.40 MiB/s 00:19:49.529 Latency(us) 00:19:49.529 [2024-11-29T07:51:39.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.529 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:49.529 Verification LBA range: start 0x0 length 0x2000 00:19:49.529 raid5f : 5.02 6395.85 24.98 0.00 0.00 30175.19 364.88 22207.83 00:19:49.529 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:49.529 Verification LBA range: start 0x2000 length 0x2000 00:19:49.529 raid5f : 5.03 4205.81 16.43 0.00 0.00 45781.65 124.31 32739.38 00:19:49.529 [2024-11-29T07:51:39.474Z] =================================================================================================================== 00:19:49.529 [2024-11-29T07:51:39.474Z] Total : 10601.66 41.41 0.00 0.00 36370.76 124.31 32739.38 00:19:50.908 00:19:50.908 real 0m7.505s 00:19:50.908 user 0m13.775s 00:19:50.908 sys 0m0.373s 00:19:50.908 07:51:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.908 07:51:40 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:50.908 ************************************ 00:19:50.908 END TEST bdev_verify 00:19:50.908 ************************************ 00:19:50.908 07:51:40 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.908 07:51:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:50.908 07:51:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.908 07:51:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.908 ************************************ 00:19:50.908 START TEST bdev_verify_big_io 00:19:50.908 ************************************ 00:19:50.908 07:51:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.908 [2024-11-29 07:51:40.841599] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:50.908 [2024-11-29 07:51:40.841709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90291 ] 00:19:51.167 [2024-11-29 07:51:41.023513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.426 [2024-11-29 07:51:41.159005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.426 [2024-11-29 07:51:41.159035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.994 Running I/O for 5 seconds... 
00:19:53.873 633.00 IOPS, 39.56 MiB/s [2024-11-29T07:51:45.208Z] 728.50 IOPS, 45.53 MiB/s [2024-11-29T07:51:46.150Z] 739.67 IOPS, 46.23 MiB/s [2024-11-29T07:51:47.089Z] 745.25 IOPS, 46.58 MiB/s [2024-11-29T07:51:47.089Z] 761.20 IOPS, 47.58 MiB/s 00:19:57.144 Latency(us) 00:19:57.144 [2024-11-29T07:51:47.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.144 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.144 Verification LBA range: start 0x0 length 0x200 00:19:57.144 raid5f : 5.18 441.04 27.56 0.00 0.00 7310190.40 206.59 311367.55 00:19:57.144 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.144 Verification LBA range: start 0x200 length 0x200 00:19:57.144 raid5f : 5.29 335.80 20.99 0.00 0.00 9471127.54 203.01 408440.96 00:19:57.144 [2024-11-29T07:51:47.089Z] =================================================================================================================== 00:19:57.144 [2024-11-29T07:51:47.089Z] Total : 776.84 48.55 0.00 0.00 8255001.91 203.01 408440.96 00:19:59.054 00:19:59.054 real 0m7.781s 00:19:59.054 user 0m14.332s 00:19:59.054 sys 0m0.360s 00:19:59.054 07:51:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.054 07:51:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.054 ************************************ 00:19:59.054 END TEST bdev_verify_big_io 00:19:59.054 ************************************ 00:19:59.054 07:51:48 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.054 07:51:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:59.054 07:51:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.054 07:51:48 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:59.054 ************************************ 00:19:59.054 START TEST bdev_write_zeroes 00:19:59.054 ************************************ 00:19:59.054 07:51:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.054 [2024-11-29 07:51:48.690138] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:59.054 [2024-11-29 07:51:48.690247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90389 ] 00:19:59.054 [2024-11-29 07:51:48.865836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.315 [2024-11-29 07:51:48.999237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.886 Running I/O for 1 seconds... 
00:20:00.825 29583.00 IOPS, 115.56 MiB/s 00:20:00.826 Latency(us) 00:20:00.826 [2024-11-29T07:51:50.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.826 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:00.826 raid5f : 1.01 29544.34 115.41 0.00 0.00 4318.78 1495.31 5838.14 00:20:00.826 [2024-11-29T07:51:50.771Z] =================================================================================================================== 00:20:00.826 [2024-11-29T07:51:50.771Z] Total : 29544.34 115.41 0.00 0.00 4318.78 1495.31 5838.14 00:20:02.237 00:20:02.237 real 0m3.478s 00:20:02.237 user 0m2.995s 00:20:02.237 sys 0m0.355s 00:20:02.237 07:51:52 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.237 07:51:52 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:02.237 ************************************ 00:20:02.237 END TEST bdev_write_zeroes 00:20:02.237 ************************************ 00:20:02.237 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.237 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:02.237 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.237 07:51:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.237 ************************************ 00:20:02.237 START TEST bdev_json_nonenclosed 00:20:02.237 ************************************ 00:20:02.237 07:51:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.497 [2024-11-29 
07:51:52.241856] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:02.497 [2024-11-29 07:51:52.241965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90449 ] 00:20:02.497 [2024-11-29 07:51:52.415031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.757 [2024-11-29 07:51:52.543382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.757 [2024-11-29 07:51:52.543484] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:02.758 [2024-11-29 07:51:52.543513] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.758 [2024-11-29 07:51:52.543524] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:03.018 00:20:03.018 real 0m0.642s 00:20:03.018 user 0m0.411s 00:20:03.018 sys 0m0.127s 00:20:03.018 07:51:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.018 07:51:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:03.018 ************************************ 00:20:03.018 END TEST bdev_json_nonenclosed 00:20:03.018 ************************************ 00:20:03.018 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:03.018 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:03.018 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.018 07:51:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:03.018 
************************************ 00:20:03.018 START TEST bdev_json_nonarray 00:20:03.018 ************************************ 00:20:03.018 07:51:52 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:03.018 [2024-11-29 07:51:52.957472] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:03.018 [2024-11-29 07:51:52.957581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90480 ] 00:20:03.278 [2024-11-29 07:51:53.132420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.538 [2024-11-29 07:51:53.267362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.538 [2024-11-29 07:51:53.267475] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:03.538 [2024-11-29 07:51:53.267494] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:03.538 [2024-11-29 07:51:53.267513] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:03.800 00:20:03.800 real 0m0.656s 00:20:03.800 user 0m0.419s 00:20:03.800 sys 0m0.132s 00:20:03.800 07:51:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.800 07:51:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:03.800 ************************************ 00:20:03.800 END TEST bdev_json_nonarray 00:20:03.800 ************************************ 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:03.800 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:03.800 00:20:03.800 real 0m49.170s 00:20:03.800 user 1m5.672s 00:20:03.800 sys 0m5.462s 00:20:03.800 07:51:53 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.800 07:51:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:03.800 
************************************ 00:20:03.800 END TEST blockdev_raid5f 00:20:03.800 ************************************ 00:20:03.800 07:51:53 -- spdk/autotest.sh@194 -- # uname -s 00:20:03.800 07:51:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:03.800 07:51:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.800 07:51:53 -- common/autotest_common.sh@10 -- # set +x 00:20:03.800 07:51:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:03.800 07:51:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:03.800 07:51:53 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:03.800 07:51:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:03.800 07:51:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.800 07:51:53 -- common/autotest_common.sh@10 -- # set +x 00:20:03.800 07:51:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:03.800 07:51:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:03.800 07:51:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:03.800 07:51:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.344 INFO: APP EXITING 00:20:06.344 INFO: killing all VMs 00:20:06.344 INFO: killing vhost app 00:20:06.344 INFO: EXIT DONE 00:20:06.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.605 Waiting for block devices as requested 00:20:06.866 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.866 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:07.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.808 Cleaning 00:20:07.808 Removing: /var/run/dpdk/spdk0/config 00:20:07.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:07.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:07.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:07.808 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:07.808 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:07.808 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:07.808 Removing: /dev/shm/spdk_tgt_trace.pid56840 00:20:07.808 Removing: /var/run/dpdk/spdk0 00:20:07.808 Removing: /var/run/dpdk/spdk_pid56605 00:20:07.808 Removing: /var/run/dpdk/spdk_pid56840 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57069 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57179 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57224 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57358 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57381 
00:20:07.808 Removing: /var/run/dpdk/spdk_pid57586 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57697 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57803 00:20:07.808 Removing: /var/run/dpdk/spdk_pid57921 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58029 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58068 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58105 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58181 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58298 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58740 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58815 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58883 00:20:07.808 Removing: /var/run/dpdk/spdk_pid58905 00:20:07.808 Removing: /var/run/dpdk/spdk_pid59044 00:20:07.808 Removing: /var/run/dpdk/spdk_pid59066 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59209 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59225 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59299 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59321 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59386 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59404 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59605 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59636 00:20:08.069 Removing: /var/run/dpdk/spdk_pid59725 00:20:08.069 Removing: /var/run/dpdk/spdk_pid61055 00:20:08.069 Removing: /var/run/dpdk/spdk_pid61261 00:20:08.069 Removing: /var/run/dpdk/spdk_pid61407 00:20:08.069 Removing: /var/run/dpdk/spdk_pid62039 00:20:08.069 Removing: /var/run/dpdk/spdk_pid62251 00:20:08.069 Removing: /var/run/dpdk/spdk_pid62391 00:20:08.069 Removing: /var/run/dpdk/spdk_pid63034 00:20:08.069 Removing: /var/run/dpdk/spdk_pid63359 00:20:08.069 Removing: /var/run/dpdk/spdk_pid63499 00:20:08.069 Removing: /var/run/dpdk/spdk_pid64878 00:20:08.069 Removing: /var/run/dpdk/spdk_pid65126 00:20:08.069 Removing: /var/run/dpdk/spdk_pid65277 00:20:08.069 Removing: /var/run/dpdk/spdk_pid66652 00:20:08.069 Removing: /var/run/dpdk/spdk_pid66905 00:20:08.069 Removing: /var/run/dpdk/spdk_pid67053 
00:20:08.069 Removing: /var/run/dpdk/spdk_pid68427 00:20:08.069 Removing: /var/run/dpdk/spdk_pid68867 00:20:08.069 Removing: /var/run/dpdk/spdk_pid69013 00:20:08.069 Removing: /var/run/dpdk/spdk_pid70498 00:20:08.069 Removing: /var/run/dpdk/spdk_pid70759 00:20:08.069 Removing: /var/run/dpdk/spdk_pid70905 00:20:08.069 Removing: /var/run/dpdk/spdk_pid72394 00:20:08.069 Removing: /var/run/dpdk/spdk_pid72656 00:20:08.069 Removing: /var/run/dpdk/spdk_pid72804 00:20:08.069 Removing: /var/run/dpdk/spdk_pid74286 00:20:08.069 Removing: /var/run/dpdk/spdk_pid74773 00:20:08.069 Removing: /var/run/dpdk/spdk_pid74919 00:20:08.069 Removing: /var/run/dpdk/spdk_pid75057 00:20:08.069 Removing: /var/run/dpdk/spdk_pid75475 00:20:08.069 Removing: /var/run/dpdk/spdk_pid76194 00:20:08.069 Removing: /var/run/dpdk/spdk_pid76571 00:20:08.069 Removing: /var/run/dpdk/spdk_pid77266 00:20:08.069 Removing: /var/run/dpdk/spdk_pid77701 00:20:08.069 Removing: /var/run/dpdk/spdk_pid78452 00:20:08.069 Removing: /var/run/dpdk/spdk_pid78880 00:20:08.069 Removing: /var/run/dpdk/spdk_pid80827 00:20:08.069 Removing: /var/run/dpdk/spdk_pid81271 00:20:08.069 Removing: /var/run/dpdk/spdk_pid81711 00:20:08.069 Removing: /var/run/dpdk/spdk_pid83785 00:20:08.069 Removing: /var/run/dpdk/spdk_pid84266 00:20:08.069 Removing: /var/run/dpdk/spdk_pid84789 00:20:08.069 Removing: /var/run/dpdk/spdk_pid85846 00:20:08.069 Removing: /var/run/dpdk/spdk_pid86169 00:20:08.069 Removing: /var/run/dpdk/spdk_pid87106 00:20:08.069 Removing: /var/run/dpdk/spdk_pid87434 00:20:08.069 Removing: /var/run/dpdk/spdk_pid88369 00:20:08.069 Removing: /var/run/dpdk/spdk_pid88695 00:20:08.330 Removing: /var/run/dpdk/spdk_pid89378 00:20:08.330 Removing: /var/run/dpdk/spdk_pid89658 00:20:08.330 Removing: /var/run/dpdk/spdk_pid89725 00:20:08.330 Removing: /var/run/dpdk/spdk_pid89767 00:20:08.330 Removing: /var/run/dpdk/spdk_pid90013 00:20:08.330 Removing: /var/run/dpdk/spdk_pid90191 00:20:08.330 Removing: /var/run/dpdk/spdk_pid90291 
00:20:08.330 Removing: /var/run/dpdk/spdk_pid90389 00:20:08.330 Removing: /var/run/dpdk/spdk_pid90449 00:20:08.330 Removing: /var/run/dpdk/spdk_pid90480 00:20:08.330 Clean 00:20:08.330 07:51:58 -- common/autotest_common.sh@1453 -- # return 0 00:20:08.330 07:51:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:08.330 07:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.330 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:20:08.330 07:51:58 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:08.330 07:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.330 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:20:08.330 07:51:58 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:08.330 07:51:58 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:08.330 07:51:58 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:08.330 07:51:58 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:08.330 07:51:58 -- spdk/autotest.sh@398 -- # hostname 00:20:08.330 07:51:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:08.590 geninfo: WARNING: invalid characters removed from testname! 
00:20:35.170 07:52:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:35.170 07:52:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:36.549 07:52:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:38.459 07:52:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.371 07:52:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.282 07:52:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:44.193 07:52:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:44.193 07:52:33 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:44.193 07:52:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:44.193 07:52:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:44.193 07:52:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:44.193 07:52:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:44.193 + [[ -n 5419 ]] 00:20:44.193 + sudo kill 5419 00:20:44.204 [Pipeline] } 00:20:44.220 [Pipeline] // timeout 00:20:44.225 [Pipeline] } 00:20:44.244 [Pipeline] // stage 00:20:44.249 [Pipeline] } 00:20:44.263 [Pipeline] // catchError 00:20:44.271 [Pipeline] stage 00:20:44.273 [Pipeline] { (Stop VM) 00:20:44.288 [Pipeline] sh 00:20:44.577 + vagrant halt 00:20:47.119 ==> default: Halting domain... 00:20:55.271 [Pipeline] sh 00:20:55.604 + vagrant destroy -f 00:20:58.149 ==> default: Removing domain... 
00:20:58.163 [Pipeline] sh 00:20:58.449 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:58.459 [Pipeline] } 00:20:58.476 [Pipeline] // stage 00:20:58.481 [Pipeline] } 00:20:58.496 [Pipeline] // dir 00:20:58.501 [Pipeline] } 00:20:58.516 [Pipeline] // wrap 00:20:58.522 [Pipeline] } 00:20:58.535 [Pipeline] // catchError 00:20:58.546 [Pipeline] stage 00:20:58.549 [Pipeline] { (Epilogue) 00:20:58.562 [Pipeline] sh 00:20:58.849 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:03.062 [Pipeline] catchError 00:21:03.064 [Pipeline] { 00:21:03.078 [Pipeline] sh 00:21:03.368 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:03.369 Artifacts sizes are good 00:21:03.379 [Pipeline] } 00:21:03.393 [Pipeline] // catchError 00:21:03.404 [Pipeline] archiveArtifacts 00:21:03.412 Archiving artifacts 00:21:03.534 [Pipeline] cleanWs 00:21:03.550 [WS-CLEANUP] Deleting project workspace... 00:21:03.550 [WS-CLEANUP] Deferred wipeout is used... 00:21:03.558 [WS-CLEANUP] done 00:21:03.560 [Pipeline] } 00:21:03.576 [Pipeline] // stage 00:21:03.583 [Pipeline] } 00:21:03.598 [Pipeline] // node 00:21:03.603 [Pipeline] End of Pipeline 00:21:03.643 Finished: SUCCESS